Several of us overly online biologists spent years quietly running an experiment on Twitter, trying to find out whether tweeting about new studies from a set of mid-range journals caused an increase in later citations, compared to a set of untweeted control articles.

Turns out we had no noticeable effect; the tweeted papers were cited at the same rate as the control set.

Our paper, headed by Trevor Branch, was published today in PLOS One:

#SciComm #Twitter #X #Science

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0292201

Controlled experiment finds no detectable citation bump from Twitter promotion

Multiple studies across a variety of scientific disciplines have shown that the number of times that a paper is shared on Twitter (now called X) is correlated with the number of citations that paper receives. However, these studies were not designed to answer whether tweeting about scientific papers causes an increase in citations, or whether the correlation simply reflects that some papers have higher relevance, importance, or quality and are therefore both tweeted about more and cited more. The authors of this study are leading science communicators on Twitter from several life science disciplines, with substantially higher follower counts than the average scientist, making us uniquely placed to address this question. We conducted a three-year-long controlled experiment, randomly selecting five articles published in the same month and journal, and randomly tweeting one while retaining the others as controls. This process was repeated for 10 articles from each of 11 journals, recording Altmetric scores, number of tweets, and citation counts before and after tweeting. Randomization tests revealed that tweeted articles were downloaded 2.6–3.9 times more often than controls immediately after tweeting, and retained significantly higher Altmetric scores (+81%) and number of tweets (+105%) three years after tweeting. However, while some tweeted papers were cited more than their respective control papers published in the same journal and month, the overall increase in citation counts after three years (+7% for Web of Science and +12% for Google Scholar) was not statistically significant (p > 0.15). Therefore, while discussing science on social media has many professional and societal benefits (and has been a lot of fun), increasing the citation rate of a scientist’s papers is likely not among them.
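The randomization test behind a one-tweeted-versus-controls design like this can be sketched roughly as follows. This is a minimal illustration of the general technique, not the paper's actual analysis code: the 1-tweeted / 4-control structure comes from the abstract, while the specific test statistic (mean tweeted-minus-control difference) and the function itself are assumptions for the example.

```python
import random

def randomization_test(sets, n_perm=10000, seed=0):
    """One-sided randomization (permutation) test for a
    1-treated / k-control matched-set design.

    `sets` is a list of lists; in each inner list the first value is the
    outcome (e.g. citation count) for the tweeted paper and the rest are
    its same-journal, same-month controls.
    """
    rng = random.Random(seed)
    # Observed statistic: mean of (tweeted minus mean-of-controls) per set.
    observed = sum(s[0] - sum(s[1:]) / (len(s) - 1) for s in sets) / len(sets)
    count = 0
    for _ in range(n_perm):
        stat = 0.0
        for s in sets:
            # Re-assign the "tweeted" label at random within each set.
            shuffled = s[:]
            rng.shuffle(shuffled)
            stat += shuffled[0] - sum(shuffled[1:]) / (len(shuffled) - 1)
        if stat / len(sets) >= observed:
            count += 1
    # p-value: fraction of label permutations at least as extreme as observed.
    return observed, count / n_perm
```

Shuffling the treatment label within each matched set respects the journal-and-month matching, which is what gives the design its causal leverage over the earlier correlational studies.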

@alexwild

Nice work! This takes me back to speculative musings on the time domain behaviour of these interventions (http://hdl.handle.net/20.500.11937/32897, your Figure 2 made me think of my Figure 4)

You've really captured the immediacy of the viewing effect, and I'm wondering whether a citation effect might be clearer if analysed in a more time-dependent way rather than at a three-year census point...

...but you've given us the necessary information to make that analysis possible, which is fabulous! (whether I have the time is another question)

The other question I've got is whether the citations might show greater diversity (reaching a wider range of scholars) because they are coming through a set of followers that might have wider geographic or disciplinary diversity. And we can test that as well! (same caveats apply...)

The road less travelled: optimising for the unknown and unexpected impacts of research

@alexwild @cameronneylon
Yes, this makes sense…
There are direct citations (citing X because I’m replicating X / extending X by taking the next step / assimilating X into a theory) and there are more indirect citations (citing X because it’s interesting & cool & maybe it can link to these data Y). Social media might be expected to pull more of the latter, but evidently not noticeably so. A deeper dive into the non-sig citation gain might examine this diversity
@johnntowse @alexwild The other point is that using a bigger citation data source might give a different result if there is a real effect but the effect size isn't huge and the statistical power not quite there. That's another thing that would be relatively easy to test with OpenCitations and the DOIs (I'll put it on the list...)
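A re-test along those lines could pull per-DOI citation counts from OpenCitations. A minimal sketch, with the caveat that the COCI endpoint path and the `[{"count": "N"}]` response shape below are assumptions based on the public OpenCitations REST API and should be checked against its current documentation:

```python
import json
from urllib.request import urlopen

# Assumed COCI citation-count endpoint (verify against OpenCitations docs).
COCI_COUNT = "https://opencitations.net/index/coci/api/v1/citation-count/{doi}"

def citation_count_url(doi):
    """Build the request URL for a given DOI."""
    return COCI_COUNT.format(doi=doi)

def parse_count(payload):
    """Parse a COCI citation-count response, e.g. '[{"count": "12"}]'."""
    records = json.loads(payload)
    return int(records[0]["count"]) if records else 0

def fetch_citation_count(doi):
    """Fetch the citation count for one DOI (network call)."""
    with urlopen(citation_count_url(doi)) as resp:
        return parse_count(resp.read().decode("utf-8"))
```

Running this over the tweeted and control DOIs would give an independent citation dataset for checking whether the non-significant +7–12% gap holds up in a larger index.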
@cameronneylon @alexwild
The issue of power is discussed in the paper of course, but I am sympathetic to the argument that the effect size is not that impactful (even if it exists at the population level, it’s not making much difference for the individuals who do or don’t tweet about their papers)

@johnntowse @alexwild

Agreed, my counter would be that in many of these cases the distribution of effects amongst individual outputs is wild, so effect sizes may look small on average but the effect when it happens can be quite large. And I would always have expected any effect to be large, but for a subset of papers.

Obviously, randomised controlled trials like this smear some of those effects out by design.

I feel that a Hidden Markov Model or time domain analysis would ultimately help in understanding the underlying pathways. But I also get that those approaches tell us about probabilistic associations, not causality - which is where the approach here is strong

And all of that said your main point is well supported - that for any specific paper, being tweeted about doesn't (didn't?) lead to significantly more citations on average

@alexwild @cameronneylon
Absolutely, these are really interesting questions to think about in response to a clever paper. (And in the meantime those who stay away from social media / certain social media can modulate their FOMO!)