#statstab #401 Common issues, conundrums, and other things that might come up when implementing mixed models
Thoughts: GLMMs are cool, but come with their own quirks.
#statstab #398 Eta^2 for Bayesian models {effectsize}
Thoughts: Great resource, but scroll to "Eta Squared from Posterior Predictive Distribution"
Functions to compute effect size measures for ANOVAs, such as Eta- (η), Omega- (ω) and Epsilon- (ε) squared, and Cohen's f (or their partialled versions) for ANOVA tables. These indices represent an estimate of how much variance in the response variables is accounted for by the explanatory variable(s). When passing models, effect sizes are computed using the sums of squares obtained from anova(model), which might not always be appropriate. See details.
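A minimal sketch of the posterior-predictive version the post points to, assuming {effectsize} and {rstanarm} are installed; the toy mtcars model is mine, not from the linked resource:

```r
library(effectsize)
library(rstanarm)

# Illustrative Bayesian model; any rstanarm/brms fit with categorical terms works similarly.
fit <- stan_glm(mpg ~ factor(cyl) + wt, data = mtcars, refresh = 0)

# For each posterior predictive draw, an ANOVA-style decomposition is computed,
# giving a posterior distribution of eta^2 per term rather than a single point estimate.
es_post <- eta_squared_posterior(fit, draws = 500, verbose = FALSE)
summary(es_post)
```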
New on the blog: Using Bayesian tools to be a better frequentist
Turns out that for negative binomial regression with small samples, standard frequentist tools fail to achieve their stated goals. Bayesian computation ends up providing better frequentist guarantees. Not sure this is a general phenomenon, just a specific example.
https://www.martinmodrak.cz/2025/07/09/using-bayesian-tools-to-be-a-better-frequentist/
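Not the post's code, but a rough sketch of the kind of comparison it describes: a small-sample negative binomial regression, with frequentist intervals from {MASS} next to Bayesian ones from {brms}. The simulated data and settings are illustrative only:

```r
library(MASS)
library(brms)

# Small simulated dataset (n = 20) with a negative binomial outcome.
set.seed(1)
d <- data.frame(x = rnorm(20))
d$y <- rnbinom(20, mu = exp(0.5 + 0.8 * d$x), size = 2)

# Frequentist fit: profile/Wald intervals can misbehave at this sample size.
freq_fit <- glm.nb(y ~ x, data = d)
confint(freq_fit)

# Bayesian fit of the same model; per the post, its credible intervals
# can end up with better frequentist coverage in this setting.
bayes_fit <- brm(y ~ x, data = d, family = negbinomial(), refresh = 0)
fixef(bayes_fit)
```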
okay #rstats #rstan #stan hivemind:
do you have any examples of Stan models (incl #brms) running in production, especially attached to Shiny apps where responsiveness/compute time is pretty important (and interfacing with non-quant people)?
What tricks do you use?
Please send blogs, packages, repos, anecdotes! :)
Please do not send: suggestions that I use an empirical Bayes/frequentist framework. I know how to do that :)
#statstab #350 Communicating causal effect heterogeneity
By @matti
Thoughts: Cool guide on properly communicating uncertainty in effects.
#bayesian #uncertainty #ggplot #r #brms #tidybayes #heterogeneity
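A hypothetical sketch of the kind of plot such a guide covers (not @matti's code): the posterior of each group's total treatment effect from a varying-slope {brms} model, drawn with {tidybayes}/{ggdist}. Data, variable names, and model are all made up for illustration:

```r
library(brms)
library(tidybayes)
library(ggdist)
library(dplyr)
library(ggplot2)

# Simulated data with a treatment effect that varies across groups.
set.seed(42)
d <- data.frame(
  group     = rep(letters[1:6], each = 30),
  treatment = rep(0:1, times = 90)
)
d$y <- rnorm(180, mean = (0.5 + rnorm(6, sd = 0.4)[as.integer(factor(d$group))]) * d$treatment)

fit <- brm(y ~ treatment + (treatment | group), data = d, refresh = 0)

# Combine the population-level slope with each group's deviation,
# then show the full posterior per group instead of one point estimate per group.
fit %>%
  spread_draws(b_treatment, r_group[group, term]) %>%
  filter(term == "treatment") %>%
  mutate(group_effect = b_treatment + r_group) %>%
  ggplot(aes(x = group_effect, y = group)) +
  stat_halfeye()
```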
#statstab #328 How to Assess Task Reliability using Bayesian Mixed Models
by @Dom_Makowski
Thoughts: Nice walkthrough using {brms}, with code, data gen, and plots.
#r #bayesian #mixedeffects #reliability #brms
https://realitybending.github.io/post/2024-03-18-signaltonoisemixed/
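Not the post's code, but a minimal sketch of one common version of the idea: reliability as the posterior of the between-participant variance share (an ICC) from an intercept-only {brms} mixed model. The simulated reaction-time data are illustrative:

```r
library(brms)

# Simulated data: 30 participants, 10 trials each, with between-participant spread.
set.seed(1)
d <- data.frame(participant = rep(1:30, each = 10))
d$rt <- 500 + rnorm(30, sd = 50)[d$participant] + rnorm(300, sd = 80)

fit <- brm(rt ~ 1 + (1 | participant), data = d, refresh = 0)

# Compute the variance ratio draw by draw, giving a posterior for the reliability
# rather than a single point-estimate ICC.
draws <- as_draws_df(fit)
icc_draws <- draws$sd_participant__Intercept^2 /
  (draws$sd_participant__Intercept^2 + draws$sigma^2)
quantile(icc_draws, c(0.025, 0.5, 0.975))
```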
#statstab #299 The role of "max_treedepth" in No-U-Turn?
Thoughts: Once you start using more complex models you will run into issues at some point; this is one of them, and the thread is a good solution guide.
#brms #bayesian #modeling #stats #issues #solutions #stan #forum
https://discourse.mc-stan.org/t/the-role-of-max-treedepth-in-no-u-turn/24155
Hi, I am working on a model where I saw the treedepth could reach 14, leading to slow fitting. I wanted to understand this parameter better, so I tested a simpler model with different max_treedepth values. It seems to me that too low a max_treedepth leads to inefficient sampling (low ESS). Could someone explain how max_treedepth works in the algorithm? What is an appropriate value of max_treedepth when using the stan() function in the rstan package? Thank you.
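Not from the thread, but a sketch of the usual knob: passing a higher max_treedepth (default 10) through brm()'s control list, which is forwarded to Stan's NUTS sampler, then checking how often the cap is actually hit. The data and formula below are placeholders I made up:

```r
library(brms)

# Toy hierarchical data, just to have something to fit.
set.seed(1)
d <- data.frame(x = rnorm(100), g = rep(1:10, each = 10))
d$y <- d$x + rnorm(10)[d$g] + rnorm(100)

fit <- brm(
  y ~ x + (1 | g), data = d,
  control = list(max_treedepth = 12, adapt_delta = 0.95),
  refresh = 0
)

# Count post-warmup iterations that saturated the treedepth cap:
np <- nuts_params(fit)
sum(np$Value[np$Parameter == "treedepth__"] >= 12)
```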
Incidentally, our companion #rstats Reacnorm package is now live on CRAN, so it's as easy as `install.packages("Reacnorm")` and `vignette("TutoReacnorm")` to access our nice tutorial on analysing reaction norms using the #brms and Reacnorm packages.
Partitions the phenotypic variance of a plastic trait, studied through its reaction norm. The variance partition distinguishes between the variance arising from the average shape of the reaction norms (V_Plas) and the (additive) genetic variance. The latter is itself separated into an environment-blind component (V_G/V_A) and the component arising from plasticity (V_GxE/V_AxE). The package also provides a way to further partition V_Plas into aspects (slope/curvature) of the shape of the average reaction norm (pi-decomposition) and partition V_Add (gamma-decomposition) and V_AxE (iota-decomposition) into the impact of genetic variation in the reaction norm parameters. Reference: de Villemereuil & Chevin (2025) <doi:10.32942/X2NC8B>.