A colleague of mine in psychology recently shared with me the idea of "the troubling trio", characterized by: (1) low statistical power, (2) a surprising result, and (3) a p value only slightly less than 0.05. I hadn't heard this particular phrase before, so I'm sharing it. More people should know why the troubling trio is something to be troubled by, and should be on the lookout for this combination in their work. #datascience #stats #quant https://doi.org/10.1177/0956797615616374 (by @dstephenlindsay)
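To see why point (1) alone is already troubling, here is a minimal simulation sketch (not from the linked paper; the base rate of true effects, the power levels, and the function name are all illustrative assumptions). It treats each study as a coin flip that comes up "significant" with probability equal to the power when the effect is real, and with probability alpha when it is not, then asks what fraction of significant results reflect a real effect:

```python
import random

def ppv_of_significance(power, alpha=0.05, base_rate=0.1,
                        n_studies=200_000, seed=0):
    """Estimate the positive predictive value of a 'significant' result.

    Each simulated study tests a hypothesis that is true with
    probability `base_rate`; it comes out significant with probability
    `power` if the effect is real, and `alpha` if it is not.
    Returns the fraction of significant results that are true effects.
    """
    rng = random.Random(seed)
    true_hits = false_hits = 0
    for _ in range(n_studies):
        effect_real = rng.random() < base_rate
        p_sig = power if effect_real else alpha
        if rng.random() < p_sig:
            if effect_real:
                true_hits += 1
            else:
                false_hits += 1
    return true_hits / (true_hits + false_hits)

# Underpowered vs. well-powered studies, same alpha and base rate.
low = ppv_of_significance(power=0.2)
high = ppv_of_significance(power=0.9)
print(f"PPV at 20% power: {low:.2f}")
print(f"PPV at 90% power: {high:.2f}")
```

With these (assumed) numbers, roughly two thirds of significant results from the underpowered studies are false positives, versus about a third for the well-powered ones, which is one concrete sense in which a lone p < 0.05 from a low-power, surprising-result study deserves extra skepticism.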
@lakens @dstephenlindsay Thanks for letting me know. I'm a real amateur when it comes to finding Mastodon handles! (I suspect a lot of us are.)
@humanitiesData @dstephenlindsay I just thought he'd be happy to see your enthusiasm - it's also fine to post papers without tagging authors, but being a fan of this paper, I was excited to tag him for you :)
@lakens @dstephenlindsay Since it's Mastodon, I can also edit the OP, and I did!
@humanitiesData Damn, we just wrote a commentary on a study where "troubling trio" would have been a perfect description. The authors nonetheless concluded that it was "convincing evidence".
@ploederl So I take it I'm not the only one who hadn't heard this phrase before. Whenever I come across something like this, I feel like I should already know about it!

@humanitiesData

It is true that these issues exist. One way to "fight" them is to treat a study as providing evidence or description, rather than a decisive answer to a problem. This "transparent thresholding" reduces the dependence on arbitrary threshold values, and seems a more realistic way to present the results of a single study. No single study can provide The Answer on a hypothesis, so why pretend that it does?

@afni_pt I think the emphasis on "evidence/description" vs. "decisive answer" is a bit of a red herring. There's an underlying continuum of confidence behind every "the Answer" type claim, and in an evidence/description mode, lots of things that are invoked as evidence do not in fact support the stated claim/belief, so how does that get us away from the problem? Maybe what I think is closer to what you think and I'm misunderstanding. Thanks for those refs as well! They look great!
@humanitiesData : Many studies are presented in "the answer mode" (= these are the significant results, and nothing else, we have decided), even though they are actually evidence on the continuum of confidence (nice phrase!). It is that disconnect that seems unnecessary: why not *present* in evidence mode from the start, since that is more accurate, and it would be much richer and more helpful for the field? Why pretend in figures that these are The Only regions that matter, for example?
@afni_pt that's a good clarification, and I agree with what you're saying here!