10 changes: 5 additions & 5 deletions birthdays/birthdays.R
@@ -305,7 +305,7 @@ ldraws1 |>
#' Here `khat` is larger than 0.7 indicating that importance sampling
#' even with Pareto smoothing is not able to provide accurate
#' adjustment. `min_ss` indicates how many draws would be needed to
-#' get an accurate importance weighting adjustment, and in this that
+#' get an accurate importance weighting adjustment, and in this case the
#' number is impractically big. Even though the Laplace approximation can
#' be useful, this diagnostic shows that we would eventually want to run
#' MCMC for more accurate inference.
@@ -626,7 +626,7 @@ compose_3panel(pf, pf1, pf2)
#' Seasonal component has reasonable fit to the data.
#'

#' Compare the mean and sd of parameters from Pathfinder and MCMC.
#| label: fig-births-pth2-vs-fit2
variables <- names(model2$variables()$parameters)
sp <- summarise_draws(subset(pdraws2, variable = variables))
@@ -748,7 +748,7 @@ compose_4panel(pf, pf1, pf2, pf3)
#' Weekday effects are easy to estimate as there are about a thousand
#' observations per weekday.
#'
#' Compare the mean and sd of parameters from Pathfinder and MCMC.
#| label: fig-births-pth3-vs-fit3
variables <- names(model3$variables()$parameters)
sp <- summarise_draws(subset(pdraws3, variable = variables))
@@ -883,7 +883,7 @@ compose_4panel(pf, pf1, pf2, pf3b)
#' relative number of births, that is, it is able to model the
#' increasing weekend effect.
#'
#' Compare the mean and sd of parameters from Pathfinder and MCMC.
#| label: fig-births-pth4-vs-fit4
variables <- names(model4$variables()$parameters)
sp <- summarise_draws(subset(pdraws4, variable = variables))
@@ -1113,7 +1113,7 @@ compose_6panel(pf, pf1, pf2, pf3, pf2b)
#' Compare the mean and sd of parameters from Pathfinder and
#' MCMC. In this case, we are using the non-resampled Pathfinder draws
#' (the resampled draws had only one distinct draw).
-#' Compare the mean and sd of parameters from Pathfinder and MCMC. We see that MCMC estimates of sd for some parameters is super high, indicating bad model. Instead of trying the get the computation work better, we drop this model at the moment.
+#' Compare the mean and sd of parameters from Pathfinder and MCMC. We see that MCMC estimates of sd for some parameters are super high, indicating a bad model. Instead of trying to get the computation to work better, we drop this model at the moment.
#| label: fig-births-pth5-vs-fit5
variables <- names(model5$variables()$parameters)
sp <- summarise_draws(subset(pth5$draws(), variable = variables))
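The `khat` and `min_ss` diagnostics discussed in the first hunk come from Pareto smoothed importance sampling. As a minimal illustration (with simulated log importance ratios standing in for the case study's approximation ratios), `khat` can be computed with the loo package:

```r
library(loo)
set.seed(1)
# Simulated log importance ratios (log target density minus log
# approximation density, one per draw); in the case study these come
# from the normal approximation to the posterior.
log_ratios <- rnorm(4000, sd = 1.5)
ps <- psis(log_ratios, r_eff = NA)
pareto_k_values(ps)  # khat > 0.7 signals unreliable importance sampling
```

This is a sketch, not the case study's exact computation; there the log ratios are evaluated from the fitted model and its approximation.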
2 changes: 1 addition & 1 deletion misc/chapter_08/section_08_02/kilpis_ppc.R
@@ -63,7 +63,7 @@ data_lin <- data.frame(year = data_kilpis$year,

#' In this case, we are happy with the default prior for the
#' intercept. In this specific case, the flat prior on the coefficient is
-#' also fine, but we add an weakly informative prior just for the
+#' also fine, but we add a weakly informative prior just for the
#' illustration. Let's assume we expect the temperature to change less
#' than 1°C in 10 years. With `student_t(3, 0, 0.03)` about 95% prior
#' mass has less than 0.1°C change per year, and with low degrees of
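The roughly 95% prior mass claim for `student_t(3, 0, 0.03)` can be checked with a one-liner (no brms needed): standardise the 0.1°C bound by the scale and use the t distribution CDF.

```r
# P(|beta| < 0.1) under a student_t(3, 0, 0.03) prior on the
# yearly temperature change.
pt(0.1 / 0.03, df = 3) - pt(-0.1 / 0.03, df = 3)
# roughly 0.95
```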
4 changes: 2 additions & 2 deletions nabiximols/nabiximols.R
@@ -151,7 +151,7 @@ fit_normal <- brm(formula = cu ~ group*week + (1 | id),
fit_normal <- add_criterion(fit_normal, criterion = "loo", save_psis = TRUE,
moment_match = TRUE)

-#' The second provided models is binomial model with the number of
+#' The second provided model is a binomial model with the number of
#' trials being $28$ for each outcome (`cu`)
#| label: fit_binomial
#| results: hide
@@ -1081,7 +1081,7 @@ loo_compare(fit_betabinomial2b, fit_betabinomial3b)
#' for a new 4-week period is big, that is the predictive
#' distribution is very wide and due to the constrained range
#' has also thick tails (actually U shape), which makes the log
-#' score not to be sensitive in tails.
+#' score not sensitive in the tails.
#'
#' As the predictive distribution is wide with thick tails, we can
#' also focus on comparing absolute error of using means of the
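The binomial model with 28 trials per outcome can be written in brms with the `trials()` addition term; a sketch mirroring the normal model's formula (the data object name is a placeholder, not the script's actual name):

```r
library(brms)
# Binomial likelihood: cu successes out of 28 trials per outcome,
# same linear predictor as the normal model above.
fit_binomial <- brm(cu | trials(28) ~ group*week + (1 | id),
                    family = binomial(), data = nabiximols_data)
```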
2 changes: 1 addition & 1 deletion park_rule/park_rule.R
@@ -263,7 +263,7 @@ print(fit_1)
fit_1$sampler_diagnostics() |> as_draws_rvars()

#' We are specifically interested in how efficient each Hamiltonian
-#' Monte Carlo iteration is. This can be measured by the the number of
+#' Monte Carlo iteration is. This can be measured by the number of
#' leapfrog steps `n_leapfrog__`, which is close to the number of log
#' density and gradient evaluations. Instead of examining
#' `n_leapfrog__` directly, it is common to examine `treedepth__` as
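The reason `treedepth__` is a good proxy for `n_leapfrog__` is that NUTS doubles the trajectory at each depth, so a tree of depth d uses at most 2^d - 1 leapfrog steps. A quick table makes the relation concrete:

```r
# Maximum number of leapfrog steps per iteration as a function of the
# NUTS tree depth (the trajectory doubles at each depth).
treedepth <- 1:10
data.frame(treedepth, max_leapfrog = 2^treedepth - 1)
```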
4 changes: 2 additions & 2 deletions planetary_motion/planetary_motion.R
@@ -822,9 +822,9 @@ ppc_plot2D(fit2p, data_pred = data_pred, plot_star = TRUE)
#'
#' The Pathfinder algorithm can be used to find many modes and obtain
#' approximate posterior draws. If the Pareto-$\hat{k}$ diagnostic for
-#' the Pathfinder approximation looks good, then the after the
+#' the Pathfinder approximation looks good, then after the
#' importance sampling those draws would be good enough and there
-#' would not be need to run MCMC. Pathfinder provides a great way to
+#' would not be a need to run MCMC. Pathfinder provides a great way to
#' fit and fail fast. Further posterior draws can be obtained using
#' MCMC with Pathfinder initialization.
#'
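The Pathfinder-then-MCMC workflow described here is direct in cmdstanr: recent cmdstanr versions accept a Pathfinder fit as `init` for `sample()`. A sketch (file and data names are placeholders, not the script's actual names):

```r
library(cmdstanr)
model <- cmdstan_model("planetary_motion.stan")  # placeholder file name
pth <- model$pathfinder(data = standata)
# If the Pareto-khat diagnostic of the Pathfinder approximation is
# poor, use its draws only as initial values for MCMC:
fit <- model$sample(data = standata, init = pth)
```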
6 changes: 3 additions & 3 deletions roaches/roaches.R
@@ -391,7 +391,7 @@ loo_compare(list(`Poisson` = loo_p1, `Neg-bin` = loo_nb1))
#' cross validation model comparison, we could have also seen that
#' Poisson is not a good model by looking at the posterior of the
#' over-dispersion parameter (which gets very small values), and there
-#' would not have been need to fit Poisson model at all.
+#' would not have been a need to fit the Poisson model at all.
#| label: fig-posterior-nb_dispersion
#| fig-height: 2
#| fig-width: 6
@@ -479,7 +479,7 @@ fit_pvi <- add_criterion(fit_pvi, criterion = "loo")
loo(fit_pvi)

#' `p_loo` is about 164, which is less than the number of parameters
-#' 267, but it is relatively large compared to to the number of
+#' 267, but it is relatively large compared to the number of
#' observations (`p_loo >> N/5`), which indicates a very flexible
#' model. In this case, this is due to having an intercept parameter for
#' each observation. Removing one observation changes the posterior
@@ -817,7 +817,7 @@ autoplot(rd) +
#' be seen in LOO-PIT values. Both negative-binomial and zero-inflated
#' negative binomial are close enough that the LOO-PIT can't see
#' discrepancy from the data, but elpd_loo and calibration plot were
-#' able to to show that zero-inflation component improves the
+#' able to show that zero-inflation component improves the
#' predictive accuracy and calibration.
#'
#' ## Analyse posterior
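The `p_loo >> N/5` heuristic used above can be checked programmatically; a sketch assuming the `fit_pvi` brms fit from this script:

```r
l <- loo(fit_pvi)
p_loo_est <- l$estimates["p_loo", "Estimate"]
N <- nobs(fit_pvi)
# p_loo much larger than N/5 flags a very flexible model whose
# posterior is sensitive to individual observations.
p_loo_est > N / 5
```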
2 changes: 1 addition & 1 deletion sbc/sbc.R
@@ -759,7 +759,7 @@ combined_ranks / combined_ecdf
#' divergences, but just to complete our tour of possibilities, we'll
#' show one more option for dealing with this type of problem.
#'
-#' The general idea is that although we might not want to/be able to
+#' The general idea is that although we might not want to or be able to
#' express our prior belief about the model (here that the two mixture
#' components are distinct) by priors on model parameters, we still
#' may be able to express our prior belief about the data itself.
2 changes: 1 addition & 1 deletion sleep_study/sleep_study.R
@@ -614,7 +614,7 @@ loo(fit4, fit5)
#' - In theory, we would not need the
#' exponential link on sigma but then we had to care for the positivity of the
#' varying intercepts on sigma and hence would have to specify, for example, an
-#' hierarchical Gamma rather than an hierarchical normal prior.
+#' hierarchical Gamma rather than a hierarchical normal prior.
#' - Look at prior predictions for the correlations to demonstrate
#' the effect of the LKJ prior for larger than 2x2 matrices
#'
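The point about the exponential link is that brms models `sigma` on the log scale by default in distributional formulas, so Gaussian varying intercepts on `sigma` need no explicit positivity constraint. A sketch (variable names follow the classic sleepstudy data and are assumptions):

```r
library(brms)
# Distributional model: sigma gets its own linear predictor, modelled
# on the log scale by default, so a hierarchical normal prior on its
# varying intercepts is fine without positivity constraints.
f <- bf(Reaction ~ Days + (Days | Subject),
        sigma ~ 1 + (1 | Subject))
```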