Controlled experiment finds no detectable citation bump from Twitter promotion https://www.biorxiv.org/content/10.1101/2023.09.17.558161v1 It is good that article choice and tweet timing were randomized. In an observational setting, people likely tweet about the most interesting papers, which are also the ones likely to perform better on attention metrics (citations, Altmetric scores, etc.).
- Altmetric scores and tweets are boosted.
- Citations are higher after three years, but without statistical significance.
Three thoughts on this: 1/
@academicchatter
1) There was only one tweet per selected paper, and a single tweet about a paper is easy to miss. I think that when authors post about their own papers, they post several times for this reason, which is likely to boost the metrics.
2) The paper focuses heavily on statistical significance, which is not achieved (at the .05 level) for citations. A look at the results suggests that the effect on citations is likely very small over a three-year period. This could have been emphasized more.
2/
@academicchatter
3) A power analysis is presented in the paper to address the possibility that the sample size is too small. However, it is an ex post power analysis that takes the observed effect as the true effect. This is not meaningful, as @lakens explains here: https://lakens.github.io/statistical_inferences/08-samplesizejustification.html#sec-posthocpower (I think @richarddmorey has also written about it; can't find it right now). Since the study is an RCT, an ex ante power analysis would have been possible. 3/
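A minimal sketch of why ex post power is uninformative, using a two-sided z-test as a stand-in (my own illustration, not the paper's analysis): "observed power" plugs the observed statistic in as the true effect, which makes it a pure monotone function of the p-value.

```python
import math

def phi(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_value(z_obs):
    """Two-sided p-value for an observed z statistic."""
    return 2.0 * (1.0 - phi(abs(z_obs)))

def posthoc_power(z_obs, z_crit=1.959963984540054):
    """'Observed power': treat the observed z as the true effect --
    the circular step an ex post power analysis takes. z_crit is
    Phi^{-1}(0.975), hard-coded since the stdlib has no inverse CDF."""
    return phi(z_obs - z_crit) + phi(-z_obs - z_crit)

# A result sitting exactly at p = .05 always has observed power ~0.50,
# regardless of the data: the p-value already contains this information.
z = 1.959963984540054
print(p_value(z), posthoc_power(z))
```

Because of this one-to-one mapping, a nonsignificant result necessarily yields low "observed power"; reporting it adds nothing beyond the p-value itself.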
@ingorohlfing @academicchatter @lakens I've written about power here (https://github.com/richarddmorey/psychology_resolution), here (https://towardsdatascience.com/why-you-shouldnt-say-this-study-is-underpowered-627f002ddf35) and here (https://richarddmorey.medium.com/power-and-precision-47f644ddea5e), but I generally try to take the tack of explaining what power is rather than what it is not. Post hoc power analyses 1) confuse a parameter for a statistic, and 2) misunderstand the whole *point* of power/sensitivity as an idea (which is a shame, because it is a good idea: power is a function, not a single probability). A power analysis is an examination of the design and test, so it doesn't matter whether you do it before or after: it would come out the same.
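The "power is a function" point can be made concrete with a sketch (my own example, assuming a two-sample z-test with known sd; the numbers are illustrative, not from the paper): a single design yields a whole power curve over candidate effect sizes, and evaluating that curve depends only on the design, not on when you do it.

```python
import math

def phi(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_function(delta, n, sigma=1.0, z_crit=1.959963984540054):
    """Power of a two-sided z-test as a function of the true effect delta,
    for a two-sample design with n per group and known sd sigma."""
    se = sigma * math.sqrt(2.0 / n)   # standard error of the mean difference
    ncp = delta / se                  # standardized (noncentrality) shift
    return phi(ncp - z_crit) + phi(-ncp - z_crit)

# One design (n = 64 per group), many questions: the curve is the answer.
for d in (0.1, 0.2, 0.5, 0.8):
    print(f"d = {d:.1f}: power = {power_function(d, n=64):.2f}")
```

Read this as a sensitivity analysis: the design with n = 64 per group is well powered for large effects but nearly blind to small ones, and that statement is true before or after the data arrive.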