"Most of the statistical tests … are severely underpowered … implies that the majority of null hypothesis significance tests we run should yield #nullResults. Yet, the published literature tells a very different story.
… central mechanism behind this gap is selection on #significance, a filter that makes statistically significant estimates more likely to appear in print than null results.
… published estimates are afflicted by a “winner’s curse” that biases them away from zero, researchers commit more Type I errors, meta-analyses become unreliable, and published studies are less likely to be replicable. When null results remain hidden in the file drawer, researchers may waste time and money studying the same ineffective interventions over and over.
… share of pure null results is extremely small: fewer than 2% of abstracts report only null findings. By contrast, over 90% of articles that rely on statistical methods prominently claim to reject at least one null hypothesis.
… estimates that are statistically distinguishable from zero must be at least one order of magnitude more likely to appear in print than null results. Arguably more realistic assumptions can easily produce magnitudes of selection on significance around 100x."
#science #statistics #bias
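
The winner's curse mechanism in the excerpt can be illustrated with a small simulation. This is a minimal sketch with hypothetical numbers (true effect of 0.2 SD, n = 25 per arm), not the paper's actual model: with such low power, the estimates that clear the significance filter are, on average, several times larger than the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a small true effect and a small sample,
# giving a severely underpowered two-sample test.
true_effect = 0.2   # true difference in means, in SD units
n = 25              # observations per arm
sims = 100_000      # number of simulated studies

# Standard error of the difference in means (unit variance in each arm).
se = np.sqrt(2 / n)

# Each simulated study produces one estimate of the effect.
estimates = rng.normal(true_effect, se, sims)

# Apply the significance filter (two-sided test at the 5% level).
significant = np.abs(estimates / se) > 1.96

power = significant.mean()
inflation = estimates[significant].mean() / true_effect

print(f"power: {power:.2f}")
print(f"mean published (significant) estimate: {inflation:.1f}x the true effect")
```

Under these assumptions, power is roughly 10%, so most honest studies yield null results; yet the studies that pass the filter overstate the true effect by a factor of about three, which is the bias away from zero described above.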




