It always amazes me how much we can explain using mixed-effects models.

#Statistics #modeling #lmer #rstats #lme4 #Bioinformatics #DataScience

Thanks to #rstats and #lme4 for helping me do frequentist nonsense in a more compute-friendly fashion.

Today I noticed that the AIC values included in the model table when I do a likelihood ratio test of two LME models do not match the AIC values I get for those models if I use the AIC() function. Is this common knowledge? Is there a good reason (e.g. something from math stat I've forgotten that makes these questions not the same)?

Here's a code snippet demonstrating. https://gist.github.com/emjonaitis/237402a6d5c6e9338aeebebbd121eba3

#rstats #lme4

aic_inconsistency.R

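If the models were fit with REML (lme4's default for lmer), one likely explanation: anova() refits both models with maximum likelihood before running the likelihood ratio test, so the AIC column in its table comes from the ML fits, while AIC() on the original objects uses the REML criterion. A minimal sketch using the built-in sleepstudy data (not the gist's models):

```r
library(lme4)

# Two nested models, fit with REML (the lmer default)
m1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy)
m2 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# anova() refits both models with ML before the likelihood ratio test,
# so its AIC column is based on the ML deviance...
anova(m1, m2)

# ...while AIC() on the original objects uses the REML criterion
AIC(m1, m2)

# Refitting with ML by hand should reproduce the anova() table's AICs
AIC(refitML(m1), refitML(m2))
```

If that's the cause, passing refit = FALSE to anova() should make the two sets of numbers agree (at the cost of comparing REML fits, which is only valid when the fixed effects are identical).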

I regard myself as relatively technologically proficient and often an early adopter. Where many within my field would do their analyses in SPSS and write their articles in Word (which is fine), I prefer a workflow with, say, #RStudio and #Quarto, write analyses and text as a single reproducible document, and collaborate with #GitHub. Still, the whole generative AI thing has always... repelled me, and even more so for any kind of work within #academia. But I don’t find it easy to explain exactly what bothers me.

An important aspect of it is the «black box» thing. Scientific work should be transparent and reproducible. Output from an #LLM is anything but.

Another thing is watching colleagues get «coding advice» from an LLM for their statistical analyses that I immediately see will not run. Where, say, #lme4 syntax and #lavaan syntax are mixed up.

Third, I’ve seen horrendous examples where students ask LLMs to find research for them, with the LLM «digging up» one fictitious article after another, with fictitious results, sometimes with actual names of actual researchers, delivered with confidence. Admittedly, that was a while ago.
2/3

#academia #education #LLM #generativeAI #AIhype #rstats

Exciting News! 📚 Our work on Reliability and Feasibility of Linear Mixed Models in Fully Crossed Experimental Designs is now published in AMPPS! 🎉 #R #lme4 #MixedModels @Scandle & @letstido @universityofleeds

https://journals.sagepub.com/doi/10.1177/25152459231214454

We present #recommendations and a clear #pipeline for handling #random effects in the presence of non-convergent and singular models. No more reduced models causing Type I errors due to data pseudoreplication!
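For anyone wanting to check their own fits for these two problems, lme4 flags both directly; a minimal sketch (the model and data here are illustrative, not from the paper):

```r
library(lme4)

# Fit a model with a random-slope structure
fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# Boundary (singular) fits: TRUE means some variance component or
# correlation was estimated at the edge of its parameter space
isSingular(fm)

# Any convergence messages from the optimizer are stored with the fit
fm@optinfo$conv$lme4
```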

What model #statistics should one report after using multiple #imputation and #multilevel regressions, and how are they obtained? I'm using the #mice package in #rstats, and #lme4 on each imputed dataset. When pooling results, summary() yields what I need for each model term, but nothing for the whole model. If I didn't impute but deleted listwise, I would normally report AIC, BIC, Loglik. These are all in the mipo object, for each result for each imputed dataset, but they're not pooled. I'm sure I'm missing something here. Does anyone know an example article where such results are presented neatly?
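I don't know of a canonical answer, but one pragmatic route (a sketch, not established practice): fit-level statistics like AIC, BIC, and log-likelihood aren't combined by Rubin's rules, so some authors report the mean or range across imputations instead of a single pooled value. Extracting them per imputed dataset is straightforward; here with mice's built-in nhanes data and a toy random-intercept model:

```r
library(mice)
library(lme4)

# Illustrative: impute the built-in nhanes data, then fit a (toy)
# random-intercept model on each completed dataset
imp  <- mice(nhanes, m = 5, printFlag = FALSE, seed = 1)
fits <- with(imp, lmer(bmi ~ chl + (1 | age)))

# Per-imputation fit statistics; Rubin's rules don't apply here,
# so the mean (or range) across imputations is one reporting option
aics <- sapply(fits$analyses, AIC)
mean(aics)
range(aics)
```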
Hey #mixedmodel #rstats peeps - anyone know of a package or function that does for #glmmTMB what merTools::predictInterval() does for #lme4? #wantingToMoveEverythingOver
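I'm not aware of a direct port, but predict.glmmTMB() accepts se.fit = TRUE, so a Wald-type interval can be built by hand. Note this is not equivalent to merTools::predictInterval(), which simulates from the estimated parameter distribution; it's only a rough stand-in. A sketch using the package's built-in Salamanders data:

```r
library(glmmTMB)

# Poisson GLMM on the package's built-in Salamanders data
m <- glmmTMB(count ~ mined + (1 | site), data = Salamanders,
             family = poisson)

# Standard errors on the link scale, then a Wald-type 95% interval
# back-transformed to the response scale
p   <- predict(m, se.fit = TRUE, type = "link")
lwr <- exp(p$fit - 1.96 * p$se.fit)
upr <- exp(p$fit + 1.96 * p$se.fit)
```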

I have written a book draft of an introduction to #multilevel modeling, entitled #Multilevel Thinking: https://agrogan1.github.io/multilevel-thinking/. Comments, questions and corrections are appreciated, as are suggestions for a possible publisher.

While applicable to many different software programs, the book is currently centered around the use of #Stata, but I hope to extend it to #rstats (#lme4) and #julialang

Multilevel Thinking

Discovering Diversity, Universals, and Particulars in Cross-Cultural Research

A mixed effects question for you #rstats #lme4 people: Persons are organised in groups for some time. No movement between groups. We have self-ratings of a Trait, and all persons in the group rate all other persons in the same group on a Behavior (but larger n/groups than in the picture). I want to predict Behavior by Trait. My first thought is a linear mixed-effects model with two random terms: Behavior ~ Trait + (1 | Subject) + (1 | Rater). Would that work? This is new territory for me.
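The crossed Subject/Rater intercepts look reasonable; since persons never move between groups, a group intercept may also be worth considering. A sketch with simulated round-robin data (all names and the design here are hypothetical, chosen to match the description):

```r
library(lme4)
set.seed(1)

# Hypothetical round-robin design: 10 groups of 6 persons,
# everyone rates everyone else in their own group
grid <- expand.grid(Group = 1:10, s = 1:6, r = 1:6)
grid <- subset(grid, s != r)
grid$Subject <- interaction(grid$Group, grid$s)  # person being rated
grid$Rater   <- interaction(grid$Group, grid$r)  # person doing the rating

# One self-rated Trait score per person, carried onto each row
trait <- rnorm(60)
grid$Trait <- trait[as.integer(grid$Subject)]

# Behavior = Trait effect + ratee effect + rater effect + noise
grid$Behavior <- 0.5 * grid$Trait +
  rnorm(60)[as.integer(grid$Subject)] +
  rnorm(60)[as.integer(grid$Rater)] +
  rnorm(nrow(grid))

# Crossed random intercepts for ratee and rater, plus a group intercept
m <- lmer(Behavior ~ Trait + (1 | Subject) + (1 | Rater) + (1 | Group),
          data = grid)
summary(m)
```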
What's the standard package in Python for doing #MixedEffectsModels instead of #R? I need to use Python unfortunately but want something with #lme4 / #lmer / #brms functionality