It must be very hard to publish null results https://d.repec.org/n?u=RePEc:zbw:i4rdps:281&r=&r=cmp
"Most of the statistical tests … are severely underpowered … implies that the majority of null hypothesis significance tests we run should yield #nullResults. Yet, the published literature tells a very different story.
… central mechanism behind this gap is selection on #significance, a filter that makes statistically significant estimates more likely to appear in print than null results.
… published estimates are afflicted by a “winner’s curse” that biases them away from zero, researchers commit more Type I errors, meta-analyses become unreliable, and published studies are less likely to be replicable. When null results remain hidden in the file drawer, researchers may waste time and money studying the same ineffective interventions over and over.
… share of pure null results is extremely small: fewer than 2% of abstracts report only null findings. By contrast, over 90% of articles that rely on statistical methods prominently claim to reject at least one null hypothesis.
… estimates that are statistically distinguishable from zero must be at least one order of magnitude more likely to appear in print than null results. Arguably more realistic assumptions can easily produce magnitudes of selection on significance around 100x"
#science #statistics #bias
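The quoted mechanism is easy to demonstrate with a toy Monte Carlo. This is not the paper's analysis; the effect size, group size, and alpha below are illustrative. Run underpowered two-sample t-tests, "publish" only the significant ones, and compare the mean published estimate with the truth:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def published_bias(true_effect=0.2, n=30, sims=20_000, alpha=0.05):
    """Simulate underpowered two-sample t-tests and return
    (mean published estimate, publication rate) when only
    significant results reach print."""
    published = []
    for _ in range(sims):
        a = rng.normal(true_effect, 1.0, n)  # treatment group
        b = rng.normal(0.0, 1.0, n)          # control group
        t, p = stats.ttest_ind(a, b)
        if p < alpha:                        # selection on significance
            published.append(a.mean() - b.mean())
    return float(np.mean(published)), len(published) / sims

mean_pub, rate = published_bias()
# Only a small fraction of runs is "published", and the published
# estimates overshoot the true effect of 0.2: the winner's curse.
```

With these numbers the design has roughly 10–15% power, so most true effects land in the file drawer and the few that escape are inflated, which is exactly the selection filter the abstract describes.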

Following criticism of a method for estimating the impact of missing trader fraud on import/export data, we got an EU grant to study all the potential data sources and methods.
Six strands of work over 6 months.
We concluded the current method was the best available.
The UK press, who'd happily piled on the criticism of the data, attended the press conference and basically told us that hearing the existing method was the best available was a waste of their time.
Eurostat published the paper and the UK media ignored it.

https://retractionwatch.us12.list-manage.com/track/click?u=4f35c1f2e9acc58eee0811e78&id=17ffcb0031&e=7fe6650ac7

#NullResults


#Kolumne
“I have not failed. I've just found 10,000 ways that won't work.”
Even T. A. Edison knew that negative results also yield insight. So why do researchers still find it so hard to publish null results? A new Springer Nature survey shows a surprisingly large gap between knowing and doing. More from Ralf Neumann: https://www.laborjournal.de/editorials/3396.php

#Laborjournal #Lifesciences #Forschung #NullResults #PublicationBias

🌍📚 #Munin2025 Abstract Spotlight 📚🌍
Samuel Winthrop on "The positivity trap" 🔍 Why publishing null results matters, survey insights & steps to boost transparency in research. 💡✨

🔗 https://doi.org/10.7557/5.8236
#OpenScience #NullResults #ResearchIntegrity

The positivity trap: is a bias against null results in research literature holding back science? | Septentrio Conference Series

Great job by Lucia Coll, a student working in my group. Lucia's #scicomm video about why #nullresults are interesting earned a jury mention in the national science communication competition #fastforwardscience in Germany. Check out the video here! https://www.instagram.com/p/DIgx4cNtWo0/?hl=en

more about the competition here: https://fastforwardscience.de/en/2025/08/young-scientist-award-short-shortlist/

Do reflection tests impact philosophical thinking?

I found 10 #nullResults of taking reflection tests before (vs. after) making decisions about #philosophy: https://doi.org/10.31234/osf.io/y8sdm_v5

Since Analysis accepted that paper, #xPhi got another null #replication: https://doi.org/10.1111/mila.12558

PS: I'm sympathetic to those short on time or caught up in job changes. But those pointing to #nullresults didn't understand the point of #preregistration. Those whose submissions were rejected by journals could still share the results as #preprints.

#statstab #238 Bridging null hypothesis testing and estimation

Thoughts: An overview of the ways you can claim "no effect" under a Bayesian framework.

#bayesian #bayesfactors #nullresults #noeffect #equivalencetests #equivalence #jasp #r

https://osf.io/preprints/psyarxiv/c7b45
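One route to claiming "no effect" that the preprint's topic area covers is an equivalence (TOST) test. A minimal one-sample sketch, with made-up equivalence bounds of ±0.3 and simulated data, assuming nothing about the preprint's own examples:

```python
import numpy as np
from scipy import stats

def tost_one_sample(x, low=-0.3, high=0.3):
    """Two one-sided tests: declare equivalence when BOTH one-sided
    nulls ('mean <= low' and 'mean >= high') are rejected. Returns
    the larger of the two p-values, the usual TOST summary."""
    n = len(x)
    m = np.mean(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    t_low = (m - low) / se        # tests H0: mean <= low
    t_high = (m - high) / se      # tests H0: mean >= high
    p_low = 1 - stats.t.cdf(t_low, df=n - 1)
    p_high = stats.t.cdf(t_high, df=n - 1)
    return max(p_low, p_high)

rng = np.random.default_rng(1)
p_null = tost_one_sample(rng.normal(0.0, 1.0, 200))    # true mean ~0
p_effect = tost_one_sample(rng.normal(0.5, 1.0, 200))  # real effect
# A small p_null supports equivalence (mean inside the bounds);
# p_effect stays large because 0.5 lies outside them.
```

Note the asymmetry with ordinary NHST: here a *small* p-value is evidence *for* "no meaningful effect", given bounds you must justify in advance.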


#statstab #236 Using Bayes to get the most out of non-significant results

Thoughts: A Bayesian way to investigate "no effect": Bayes factors. Cool guide on how to think about priors (even post hoc).

#priors #bayesfactors #nullresults #equivalence #nhbt

https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.00781/full
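To show the Bayes-factor idea without the full prior machinery the paper discusses, here is the cruder BIC approximation (Wagenmakers, 2007) for a one-sample "mean = 0" question. The data and numbers are illustrative, not from the paper:

```python
import numpy as np

def bf01_from_bic(x):
    """Approximate Bayes factor for H0: mean = 0 vs H1: mean free,
    via BF01 ~ exp((BIC1 - BIC0) / 2), with a Gaussian likelihood
    and the variance profiled out at its MLE under each model."""
    n = len(x)
    var0 = np.mean(x ** 2)           # MLE variance with mean fixed at 0
    var1 = np.var(x)                 # MLE variance with mean estimated
    ll0 = -0.5 * n * (np.log(2 * np.pi * var0) + 1)
    ll1 = -0.5 * n * (np.log(2 * np.pi * var1) + 1)
    bic0 = -2 * ll0 + 1 * np.log(n)  # H0: one free parameter (variance)
    bic1 = -2 * ll1 + 2 * np.log(n)  # H1: mean + variance
    return float(np.exp((bic1 - bic0) / 2))

rng = np.random.default_rng(2)
bf_null = bf01_from_bic(rng.normal(0.0, 1.0, 100))    # data from H0
bf_effect = bf01_from_bic(rng.normal(0.8, 1.0, 100))  # clear effect
# With null data BF01 typically exceeds 1 (evidence FOR the null);
# with a clear effect it collapses toward 0.
```

Unlike a non-significant p-value, BF01 can actively quantify support for the null, which is the paper's core point; the JZS-prior Bayes factors it recommends are a refinement of this same idea.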
