🚨 preprint alert 🚨
Initial evidence that using #statcheck in peer review may reduce errors.
Results from a preregistered observational study of 7000+ psychology articles.
/w @JelteWicherts
🧵
Background: ~50% of published psych articles that report statistics contain at least one p-value that is inconsistent with its test statistic and degrees of freedom, and in ~12.5% of articles this may affect conclusions about statistical significance.
See: https://link.springer.com/article/10.3758/s13428-015-0664-2
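For intuition, this is roughly the check #statcheck automates. A minimal R sketch with a hypothetical reported result; the real package also parses APA-style results from article text and allows for rounding of the reported test statistic:

```r
# Hypothetical reported result: "t(28) = 2.20, p = .02"
reported_t  <- 2.20
reported_df <- 28
reported_p  <- 0.02

# Recompute the two-sided p-value from the test statistic and df
computed_p <- 2 * pt(abs(reported_t), df = reported_df, lower.tail = FALSE)
computed_p
#> ~0.036

# Inconsistency: the reported p doesn't match the recomputed p
inconsistent <- round(computed_p, 2) != reported_p   # TRUE here

# Gross inconsistency: the mismatch also flips significance at alpha = .05
gross <- (reported_p < .05) != (computed_p < .05)    # FALSE here
```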
We compared statistical inconsistencies in 2 journals that implemented #statcheck in peer review and 2 matched controls, before and after statcheck implementation:
- Psych Science (🤖) vs. Journal of Exp Psych: General
- Journal of Exp Soc Psych (🤖) vs. Journal of Pers & Soc Psych
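The design as a toy R sketch (made-up names and data, just to show the shape of the comparison; the real preregistered analysis and data are in the OSF repo linked below):

```r
set.seed(1)

# Toy data: one row per article. 'flagged' = statcheck found >= 1
# inconsistency; 'period' = published before/after the treated journal
# started running statcheck in peer review. (All values made up.)
articles <- data.frame(
  journal = rep(c("treated", "control"), each = 200),
  period  = rep(rep(c("before", "after"), each = 100), times = 2),
  flagged = rbinom(400, 1, prob = 0.4)
)

# Prevalence of flagged articles per journal x period cell; the question
# is whether "after" drops more in the treated journal than in its control
with(articles, tapply(flagged, list(journal, period), mean))
```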
These results provide initial evidence that using #statcheck in peer review may be a successful intervention to decrease statistical reporting inconsistencies. 🤖✅
but... >>
🔢 You can find the preregistration, data, and R code here: https://osf.io/q84jn/
🤖 Interested in #statcheck? Check out the latest version on http://statcheck.io.
@regretlab I'm sorry to hear that! #statcheck is supposed to take correct rounding of the test stat into account.
I did recently fix a bug where correct rounding wasn't taken into account for negative test stats; maybe that was the case for you?
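Roughly what "taking rounding into account" means here, and why negative test stats are an edge case (my own R sketch, not statcheck's internal code):

```r
p_two_sided <- function(t, df) 2 * pt(abs(t), df, lower.tail = FALSE)

# Hypothetical reported result: "t(40) = -2.35, p = .02"
reported_t  <- -2.35
reported_df <- 40
reported_p  <- 0.02

# Any true value in [-2.355, -2.345) rounds to -2.35, so the recomputed
# p-value is an interval, not a single number...
t_lo <- reported_t - 0.005   # -2.355: larger |t|, so SMALLER p
t_hi <- reported_t + 0.005   # -2.345: smaller |t|, so LARGER p

# ...and for negative t it's t_lo, not t_hi, that gives the smaller p.
# Sorting (or taking abs() before forming the bounds) avoids a flipped
# interval, the kind of sign edge case the fixed bug was about.
p_bounds <- sort(c(p_two_sided(t_lo, reported_df),
                   p_two_sided(t_hi, reported_df)))

# The reported p = .02 stands for [.015, .025) after its own rounding,
# so the two intervals only need to overlap to count as consistent
consistent <- (reported_p + 0.005) >= p_bounds[1] &&
              (reported_p - 0.005) <= p_bounds[2]
consistent
#> TRUE
```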
And this is also why #statcheck results should not be followed blindly, and why I hope editors, reviewers, and authors will work together to reduce errors.
P.S. If you do run into a bug like that again, plz let me know! 🙏