New paper in Collabra!

With Craig Speelman, Laura Parker, and Benjamin Rapley.

We looked at the prevalence of uncritically reported aggregate statistics in 3 Q1 psychology journals in 2020.

TLDR: There's lots, but also possibly some variation between journals.

A pleasure to deal with Collabra:Psychology

#OpenScience #ergodicity #ErgodicFallacy

Most Psychological Researchers Assume Their Samples Are Ergodic: Evidence From a Year of Articles in Three Major Journals

https://online.ucpress.edu/collabra/article/10/1/92888/200006/Most-Psychological-Researchers-Assume-Their

Conventional statistical methods in most psychological research, such as null-hypothesis significance tests (NHSTs), use aggregated values (i.e., the sample means) of group behaviours to make inferences about individuals. Such inferences are possibly erroneous because groups of humans rarely, if ever, constitute an ergodic system. To assume ergodicity without checking is to commit the ‘ergodic fallacy’. The aim of the current study was to examine the prevalence of this error in contemporary psychological research. We analysed three highly cited ‘Q1’ journals in the fields of clinical, educational and cognitive psychology for statements that indicated this error. As hypothesised, the ergodic fallacy was found in the vast majority of the papers investigated here. We also hypothesised that the prevalence of this error would be highest in cognitive psychology papers because this field typically assesses theoretical claims about universal cognitive mechanisms, whereas clinical and educational psychology are more concerned with empirically supported interventions. This hypothesis was also supported by our results. Nonetheless, the prevalence of the ergodic fallacy was still high in all fields. Implications are discussed with respect to the reporting of research findings and the validity of theories in psychology.
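The core problem the abstract describes can be made concrete with a toy simulation (my own illustration, not data or code from the paper): a non-ergodic dataset in which the pooled, between-person trend is positive while every individual's within-person trend is negative, so inferring individual behaviour from the aggregate statistic gets even the sign of the effect wrong.

```python
import numpy as np

# Illustrative sketch (not from the paper): each simulated person has a high
# baseline x paired with a high baseline y, so the between-person association
# is positive, but within each person y DECREASES as x increases.
rng = np.random.default_rng(0)

n_people, n_obs = 20, 50
baselines = rng.uniform(0, 10, size=n_people)
x = baselines[:, None] + rng.normal(0, 1, size=(n_people, n_obs))
y = (2 * baselines[:, None]                      # between-person: positive
     - (x - baselines[:, None])                  # within-person: slope of -1
     + rng.normal(0, 0.5, size=(n_people, n_obs)))

# Aggregate (pooled) slope across all observations, as an NHST on group data
# would implicitly use:
pooled_slope = np.polyfit(x.ravel(), y.ravel(), 1)[0]

# Within-person slopes, one per individual:
within_slopes = np.array([np.polyfit(x[i], y[i], 1)[0] for i in range(n_people)])

print(f"pooled slope: {pooled_slope:.2f}")                       # positive
print(f"mean within-person slope: {within_slopes.mean():.2f}")   # near -1
```

Only in an ergodic system would the pooled and within-person estimates agree; here they disagree in sign for every single participant.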

University of California Press
@MarekMcGann My assumption (as a non-expert!) was that including participant as a random effect in mixed-effects models was supposed to check whether any effect, broadly speaking, generalises across individuals. Is that at least partly true?!
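The idea behind the question can be sketched in miniature (my own illustration on simulated data, not anyone's actual analysis): a random slope for participant lets the effect vary across individuals, and the estimated spread of those slopes indicates whether the average effect generalises. Here that check is approximated by fitting one slope per participant and inspecting the spread, rather than fitting a full mixed model.

```python
import numpy as np

# Hypothetical data: participants share an average effect of x on y (0.5),
# but individual effects are heterogeneous (SD 0.6), so some are negative.
rng = np.random.default_rng(1)

n_people, n_obs = 30, 40
true_slopes = rng.normal(0.5, 0.6, size=n_people)

x = rng.normal(0, 1, size=(n_people, n_obs))
y = true_slopes[:, None] * x + rng.normal(0, 0.3, size=(n_people, n_obs))

# One slope per participant (a crude stand-in for the random-slope estimates
# a mixed model would give):
slopes = np.array([np.polyfit(x[i], y[i], 1)[0] for i in range(n_people)])

mean_slope = slopes.mean()     # the "fixed effect" a grand analysis reports
sd_slope = slopes.std(ddof=1)  # large relative to the mean = poor generalisation

print(f"mean slope {mean_slope:.2f}, between-person SD {sd_slope:.2f}")
print(f"share of participants with a negative slope: {(slopes < 0).mean():.0%}")
```

In practice this is what a random-slope term does, e.g. `lmer(y ~ x + (x | participant))` in R's lme4: the fitted random-effect variance quantifies exactly this between-person spread, which a means-only analysis hides.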
The generalizability crisis

Most theories and hypotheses in psychology are verbal in nature, yet their evaluation overwhelmingly relies on inferential statistical procedures. The validity of the move from qualitative to quantitative analysis depends on the verbal and statistical ...

PubMed Central (PMC)

@MarekMcGann @Benambridge Marek, I didn’t have time to read all of the Yarkoni blog post, but I don’t see how it addresses (even in principle) Ben’s point? The specific content of our theories just doesn’t feature in (classical) inferential statistical tests. They are simply about the prob. of rejecting H_0, no?

So the whole argument on the logical relationship between data and H_1 and what type of reasoning we might or might not be engaging in seems by the by to me…

what am I missing?

@UlrikeHahn @Benambridge
No, you're right, sorry Ulrike, and Ben. I was out walking with the kids sending the response and wasn't thinking straight. I tend to tack the blog post on to citations of the paper almost by habit. The induction point isn't really relevant.