New blogpost 🚨: What do you do if you find a significant result in an underpowered study? How reliable is your finding? I discuss Type M and Type S errors.
https://mzstats.blogspot.com/2023/02/what-not-to-do-with-non-null-results.html
#statistics #frequentist #NHST #pvalue #sensitivityanalysis #falsepositiverisk #rstats

What NOT to do with NON-"null" results, Part III: Underpowered study, but significant result
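To make the blog post's topic concrete, here is a minimal simulation sketch of Type M (magnitude) and Type S (sign) errors in an underpowered two-group study. The effect size, sample size, and simulation settings below are illustrative assumptions, not taken from the post:

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.2   # assumed small true effect (hypothetical)
n = 30              # per-group sample size -> underpowered design
sims = 20_000       # number of simulated studies

# Simulate two-group comparisons: control ~ N(0, 1), treatment ~ N(true_effect, 1)
a = rng.normal(0.0, 1.0, (sims, n))
b = rng.normal(true_effect, 1.0, (sims, n))
diff = b.mean(axis=1) - a.mean(axis=1)
se = np.sqrt(a.var(axis=1, ddof=1) / n + b.var(axis=1, ddof=1) / n)
t = diff / se

sig = np.abs(t) > 1.96                 # roughly alpha = .05, two-sided
power = sig.mean()                     # share of studies reaching significance
type_s = np.mean(diff[sig] < 0)        # significant results with the wrong sign
type_m = np.mean(np.abs(diff[sig])) / true_effect  # exaggeration ratio

print(f"power ~ {power:.2f}, Type S ~ {type_s:.3f}, Type M ~ {type_m:.1f}")
```

With these settings the significant estimates overstate the true effect severalfold (Type M well above 1), which is exactly why a significant result from an underpowered study deserves caution.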
#FalsePositiveRisk in #Medicine
We provide an empirical test of Ioannidis's prediction and find that the false discovery risk is well below 50%. Now available as a citation-friendly preprint.
https://arxiv.org/abs/2302.00774
Estimating the false discovery risk of (randomized) clinical trials in medical journals based on published p-values
The influential claim that most published results are false raised concerns
about the trustworthiness and integrity of science. Since then, there have been
numerous attempts to examine the rate of false-positive results that have
failed to settle this question empirically. Here we propose a new way to
estimate the false positive risk and apply the method to the results of
(randomized) clinical trials in top medical journals. Contrary to claims that
most published results are false, we find that the traditional significance
criterion of $\alpha = .05$ produces a false positive risk of 13%. Adjusting
$\alpha$ to .01 lowers the false positive risk to less than 5%. However, our
method does provide clear evidence of publication bias that leads to inflated
effect size estimates. These results provide a solid empirical foundation for
evaluations of the trustworthiness of medical research.
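For intuition on why lowering $\alpha$ reduces the false positive risk, here is the textbook false positive risk formula (not the paper's estimator, which works from published p-values). The prior proportion of true effects and the assumed power are illustrative:

```python
def false_positive_risk(alpha, power, prior_true):
    """Textbook false positive risk: the share of significant results
    that come from true-null hypotheses, given a prior proportion of
    tested hypotheses that are true effects (illustrative, not the
    preprint's method)."""
    p_null = 1 - prior_true
    sig_from_null = p_null * alpha       # false positives
    sig_from_true = prior_true * power   # true positives
    return sig_from_null / (sig_from_null + sig_from_true)

fpr_05 = false_positive_risk(alpha=0.05, power=0.8, prior_true=0.5)
fpr_01 = false_positive_risk(alpha=0.01, power=0.8, prior_true=0.5)
print(f"alpha=.05: FPR ~ {fpr_05:.3f}")  # about 0.059 under these assumptions
print(f"alpha=.01: FPR ~ {fpr_01:.3f}")  # lower alpha -> lower risk
```

The exact numbers depend heavily on the assumed prior and power; the preprint's contribution is estimating the risk empirically rather than assuming these inputs.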