#statstab #401 Common issues, conundrums, and other things that might come up when implementing mixed models

Thoughts: GLMMs are cool, but come with their own quirks.

#glmm #lmer #brms #mixedeffects #hierarchicalmodels #r

https://m-clark.github.io/mixed-models-with-R/issues.html

Issues | Mixed Models with R

This is an introduction to using mixed models in R. It covers the most common techniques employed, with demonstration primarily via the lme4 package. Discussion includes extensions into generalized mixed models, Bayesian approaches, and realms beyond.
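A classic instance of the kind of quirk covered there: singular fits from an over-rich random-effects structure. A minimal sketch, with hypothetical data and variable names:

```r
library(lme4)

# Hypothetical data: outcome `y`, predictor `x`, grouping factor `group`
m <- lmer(y ~ x + (x | group), data = dat)

isSingular(m)  # TRUE when a variance component or correlation sits on the boundary
VarCorr(m)     # look for correlations of +/-1 or variances near 0

# One common fix: drop the slope-intercept correlation (double-bar syntax)
m2 <- lmer(y ~ x + (x || group), data = dat)
```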

#statstab #398 Eta^2 for bayesian models {effectsize}

Thoughts: Great resource, but scroll to "Eta Squared from Posterior Predictive Distribution"

#effectsize #eta2 #bayesian #brms #r

https://easystats.github.io/effectsize/reference/eta_squared.html#eta-squared-from-posterior-predictive-distribution

\(\eta^2\) and Other Effect Size for ANOVA — eta_squared

Functions to compute effect size measures for ANOVAs, such as Eta- (\(\eta\)), Omega- (\(\omega\)) and Epsilon- (\(\epsilon\)) squared, and Cohen's f (or their partialled versions) for ANOVA tables. These indices represent an estimate of how much variance in the response variables is accounted for by the explanatory variable(s). When passing models, effect sizes are computed using the sums of squares obtained from anova(model) which might not always be appropriate. See details.
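For the Bayesian case the page describes, the gist looks roughly like this (a sketch with placeholder model and data; see the linked reference for authoritative usage):

```r
library(brms)
library(effectsize)

# Hypothetical one-way design: outcome `y`, factor `condition`
fit <- brm(y ~ condition, data = dat)

# Eta squared computed over draws from the posterior predictive distribution,
# giving a posterior distribution of eta^2 rather than a single point estimate
eta_squared_posterior(fit)
```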

New on the blog: Using Bayesian tools to be a better frequentist

Turns out that for negative binomial regression with small samples, standard frequentist tools fail to achieve their stated goals. Bayesian computation ends up providing better frequentist guarantees. Not sure this is a general phenomenon, just a specific example.

https://www.martinmodrak.cz/2025/07/09/using-bayesian-tools-to-be-a-better-frequentist/

#rstats #Bayesian #brms #stan
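The flavor of the comparison, in code (a toy sketch, not the post's actual simulation; all names are made up):

```r
library(MASS)
library(brms)

# Hypothetical small-sample count data with outcome `y` and predictor `x`
# Frequentist fit and its 95% CIs, which (per the post) can under-cover here
f_fit <- glm.nb(y ~ x, data = dat)
confint(f_fit)

# Bayesian fit: posterior intervals that, in the post's setting, achieved
# better frequentist coverage
b_fit <- brm(y ~ x, data = dat, family = negbinomial())
fixef(b_fit, probs = c(0.025, 0.975))
```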

okay #rstats #rstan #stan hivemind:

do you have any examples of Stan models (incl #brms) running in production, especially attached to Shiny apps where responsiveness/compute time is pretty important (and interfacing with non-quant people)?

What tricks do you use?

Please send blogs, packages, repos, anecdotes! :)

Please do not send: suggestions that I use an empirical Bayes/frequentist framework. I know how to do that :)
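One trick that comes up constantly for brms-behind-Shiny: compile once, then refit new data without recompiling. A sketch, assuming the formula stays fixed and only the data changes (names hypothetical):

```r
library(brms)

# At deploy/build time: fit once (even a short run) to get a compiled model,
# cached to disk via the `file` argument
template <- brm(y ~ x + (1 | group), data = template_data,
                chains = 1, iter = 500, file = "model_template")

# At request time: reuse the compiled model on fresh data, skipping
# recompilation; keep the sampling budget small for responsiveness
fit <- update(template, newdata = user_data, recompile = FALSE,
              chains = 2, iter = 1000)
```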

I now understand better why Wiener Waffeln exist... every time my initial values are rejected, I eat one
#rstats #brms #rstan #ddm

#statstab #350 Communicating causal effect heterogeneity
By @matti

Thoughts: Cool guide on properly communicating uncertainty in effects.

#bayesian #uncertainty #ggplot #r #brms #tidybayes #heterogeneity

https://vuorre.com/heterogeneity-uncertainty/

Communicating causal effect heterogeneity
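The core move, roughly (a sketch with hypothetical names; the post itself is far more careful about the causal framing):

```r
library(brms)
library(tidybayes)
library(dplyr)
library(ggplot2)

# Hypothetical varying-slope model: the treatment effect differs by person
fit <- brm(y ~ treatment + (treatment | person), data = dat)

# Person-specific effects = population slope + person offset, with intervals
fit %>%
  spread_draws(b_treatment, r_person[person, term]) %>%
  filter(term == "treatment") %>%
  mutate(effect = b_treatment + r_person) %>%
  median_qi(effect) %>%
  ggplot(aes(x = effect, y = reorder(factor(person), effect))) +
  geom_pointrange(aes(xmin = .lower, xmax = .upper))
```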

#statstab #328 How to Assess Task Reliability using Bayesian Mixed Models
by @Dom_Makowski

Thoughts: Nice walkthrough using {brms}, with code, data gen, and plots.

#r #bayesian #mixedeffects #reliability #brms

https://realitybending.github.io/post/2024-03-18-signaltonoisemixed/

How to Assess Task Reliability using Bayesian Mixed Models | Reality Bending Lab

Task reliability in assessing inter-individual differences is a key issue for differential psychology and neuropsychology.

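The central calculation, in miniature (hypothetical column names; the post's data generation and plots are the real value-add):

```r
library(brms)
library(posterior)

# Hypothetical trial-level data: repeated `score`s per `subject`
fit <- brm(score ~ 1 + (1 | subject), data = dat)

# ICC-style reliability: between-subject variance over total variance,
# computed per posterior draw so uncertainty comes for free
draws <- as_draws_df(fit)
rel <- with(draws,
            sd_subject__Intercept^2 /
            (sd_subject__Intercept^2 + sigma^2))
quantile(rel, c(0.025, 0.5, 0.975))
```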

#statstab #299 The role of "max_treedepth" in No-U-Turn?

Thoughts: Once you start using more complex models, you'll run into issues at some point; this is one of them, and the thread is a good solution guide.

#brms #bayesian #modeling #stats #issues #solutions #stan #forum

https://discourse.mc-stan.org/t/the-role-of-max-treedepth-in-no-u-turn/24155

The role of "max_treedepth" in No-U-Turn?

Hi, I am working on a model where I saw the treedepth could reach 14, leading to slow fitting. I wanted to understand the role of this parameter better, so I tested a simpler model with different max_treedepth values. It seems to me that too low a max_treedepth leads to inefficient sampling (low ESS). Could someone explain how max_treedepth works in the algorithm? What is an appropriate value of max_treedepth when using the stan() function in the rstan package? Thank you.

The Stan Forums
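For reference, the knob itself, same argument in brms and rstan (formula, data, and file names hypothetical):

```r
library(brms)

# Raising max_treedepth lets NUTS take longer trajectories before truncation;
# it trades compute per iteration for sampling efficiency
fit <- brm(y ~ x + (1 | g), data = dat,
           control = list(max_treedepth = 12))

# rstan equivalent:
# fit <- rstan::stan("model.stan", data = stan_data,
#                    control = list(max_treedepth = 12))
```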

Incidentally, our companion #rstats Reacnorm package is now live on CRAN, so it's as easy as `install.packages("Reacnorm")` and `vignette("TutoReacnorm")` to access our nice tutorial on analysing reaction norms using the #brms and Reacnorm packages.

https://cran.r-project.org/package=Reacnorm

Reacnorm: Perform a Partition of Variance of Reaction Norms

Partitions the phenotypic variance of a plastic trait, studied through its reaction norm. The variance partition distinguishes between the variance arising from the average shape of the reaction norms (V_Plas) and the (additive) genetic variance. The latter is itself separated into an environment-blind component (V_G/V_A) and the component arising from plasticity (V_GxE/V_AxE). The package also provides a way to further partition V_Plas into aspects (slope/curvature) of the shape of the average reaction norm (pi-decomposition) and to partition V_Add (gamma-decomposition) and V_AxE (iota-decomposition) into the impact of genetic variation in the reaction norm parameters. Reference: de Villemereuil & Chevin (2025) <doi:10.32942/X2NC8B>.

what are your best tips to fit shifted lognormal models (in #brms / Stan)? I'm using:

- checking the long tails (a few long RTs make the tail estimation unwieldy)
- low initial values for ndt
- careful prior checks
- pathfinder estimation of initial values

Still, with increasing data, chains get stuck.
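For concreteness, the kind of setup those tips imply (a sketch; names are hypothetical, and the `ndt` init assumes brms' default parameter naming, which is worth verifying via `stancode(fit)`):

```r
library(brms)

# Hypothetical RT data: `rt` in seconds, one row per trial, `subject` factor
init_low_ndt <- function() list(ndt = 0.05)  # start the shift well below min(rt)

fit <- brm(
  rt ~ 1 + (1 | subject),
  family = shifted_lognormal(),
  data = dat,
  init = init_low_ndt,                 # low ndt inits help avoid rejections
  control = list(adapt_delta = 0.95),  # gentler steps for the awkward tail
  chains = 4
)
```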