This week, Science published a stunningly irresponsible news story entitled "Fake scientific papers are alarmingly common," claiming that upward of 30% of the scientific literature is fake.

https://www.science.org/content/article/fake-scientific-papers-are-alarmingly-common

Below, the first two paragraphs of the story.

Headline and intro notwithstanding, the story itself later notes that the detector doesn't actually work and flags nearly half of real papers as fake. Does the reporter just not understand that?

h/t @Hoch

Fake scientific papers are alarmingly common

But new tools show promise in tackling growing symptom of academia’s “publish or perish” culture

The numbers in this story are based on a laughable "fake paper detector" that consists of the following two checks, and NOTHING else. Do the authors:

1) use private (non-institutional) email addresses and/or have a hospital affiliation,

and

2) have no international coauthors.

That's it.

If both criteria are met, the paper is deemed a "potential red-flag fake publication" and counted toward that 30% tally.
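For concreteness, the entire rule fits in a few lines of code. A minimal sketch; the field names are my own shorthand, not the preprint's:

```python
def is_red_flagged(paper):
    # The whole "detector": (private email OR hospital affiliation)
    # AND no international coauthors. Nothing else is checked.
    suspicious_contact = (paper["uses_private_email"]
                          or paper["has_hospital_affiliation"])
    return suspicious_contact and not paper["has_international_coauthor"]

# A single-country team using a Gmail address gets flagged as "fake".
paper = {"uses_private_email": True,
         "has_hospital_affiliation": False,
         "has_international_coauthor": False}
print(is_red_flagged(paper))  # True -> counted toward the 30% tally
```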

Spin notwithstanding, the technical details in the preprint itself make it abundantly clear that the method doesn't work.

In a "juiced" test set with as many fake papers as real ones, the indicators that they use have a sensitivity of 86% and a false alarm rate of 44%.

Yes, they flag 44% of the known real papers as fake.

That's not a detector; it's a coin flip.
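And outside that juiced set it only gets worse. Plugging the reported numbers into Bayes' rule shows what a flag would actually mean at a realistic base rate; the 2% prevalence below is my illustrative assumption, not a figure from the preprint:

```python
# Operating point reported on the balanced ("juiced") test set.
sensitivity = 0.86   # P(flag | fake)
false_alarm = 0.44   # P(flag | real)

def precision(prevalence):
    """P(fake | flagged) at a given base rate of fake papers."""
    tp = sensitivity * prevalence
    fp = false_alarm * (1 - prevalence)
    return tp / (tp + fp)

print(f"{precision(0.50):.0%}")  # ~66% on the 50/50 test set
print(f"{precision(0.02):.0%}")  # ~4% at an illustrative 2% base rate
```

At any plausible base rate of fraud, the overwhelming majority of flagged papers would be genuine.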

This should be a profound embarrassment to everyone involved with the preprint and Science story alike.

https://www.medrxiv.org/content/10.1101/2023.05.06.23289563v1.full.pdf

To test their indicator, the authors conjecture that a valid indicator should meet three criteria based on a questionnaire sent to authors:

(i) Authors of fake publications are reluctant to provide critical information as revealed by their response – or non-response – to the questionnaire by the editor,

(ii) the number of fake publications increases steadily over time, and

(iii) journals with a low to medium impact factor are most affected.

There's a huge problem here.

If non-US/EU authors are more likely to use non-institutional email addresses, the detector will pick up these authors disproportionately.

And indeed we expect non-US/EU authors to

i) have lower response rates, due to language issues, not to mention (deserved, apparently!) distrust,

ii) make up an increasing fraction of publication share, and

iii) publish at higher rates in low- and mid-impact journals.

So all three test hypotheses fail to distinguish fake papers from ordinary non-US/EU authorship.
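A toy simulation makes the confound concrete. The regional numbers below are made up purely for illustration, and by construction there is zero difference in fakery between the two groups, only in email habits and coauthorship:

```python
import random

random.seed(0)

def flag_rate(p_private_email, p_intl_coauthor, n=100_000):
    # Fraction of papers flagged, modeling only the email criterion
    # (hospital affiliation omitted for simplicity).
    flagged = 0
    for _ in range(n):
        private_email = random.random() < p_private_email
        intl_coauthor = random.random() < p_intl_coauthor
        if private_email and not intl_coauthor:
            flagged += 1
    return flagged / n

# Illustrative (made-up) regional differences in email use and
# international coauthorship; fake-paper rates are identical.
print(f"US/EU:     {flag_rate(0.10, 0.60):.0%}")  # ~4% flagged
print(f"non-US/EU: {flag_rate(0.50, 0.30):.0%}")  # ~35% flagged
```

Same (zero) signal in both groups, wildly different flag rates. That is what a confounded indicator looks like.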

And that's the worst part of it.

While I don't want to imply anything about the motivations of the authors, *their paper has racist consequences.*

Their paper implements a detector that they themselves show doesn't work, and that we have every reason to expect disproportionately flags papers from Asia and the Global South, and then concludes that these areas contribute the most fake papers.

Figure 3 is a disgrace.

I'm astonished that Science credulously boosted this rubbish, and even more surprised that Gerd Gigerenzer put his name on it.

/fin

Further thoughts in the continued thread here: https://fediscience.org/@ct_bergstrom/110359692279086139
Carl T. Bergstrom (@[email protected]):

Continued: it's striking that the authors didn't even use conventional machine learning procedures to develop their classifier. Rather, they chose features that made sense to them as indicators. These were not even indicators of fake papers, but rather indicators of non-response to a survey, which of course is a very different thing than authorship of a fake paper.
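For contrast, a conventional pipeline would at minimum fit a classifier on labeled examples and validate it out of sample. A minimal sketch on stand-in random data; nothing here comes from the preprint:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data. In a real study, X would hold paper features
# (email domain type, coauthor countries, ...) and y would come
# from a vetted corpus of known fake vs. genuine papers.
X = rng.normal(size=(1000, 4))
y = rng.integers(0, 2, size=1000)

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, scoring="roc_auc", cv=5)
print(f"cross-validated AUC: {scores.mean():.2f}")  # ~0.5 on pure noise
```

Held-out validation is exactly what tells you a feature set carries no signal; per the post above, the preprint skipped that step.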
@ct_bergstrom One thing I still don’t get: How can a detector with 44% false positives end up reporting some 30% hits? Am I missing something or is the dissonance really just that profound?
@jpelckolsen I suppose the 44% was on the small test set?
@ct_bergstrom Shouldn't that still raise a flag? Say I make a "human-or-cat detector" that flags an individual as a cat if it weighs less than 30 kg. With my household as the test set, that would give a sensitivity of 100% and a specificity of 50% (like the detector in the article), meaning I expect to flag half of all humans as cats. If I then apply that detector to my workplace, I'd find 0% cats, but how could that be? I expect at least 50% of my colleagues to be false positives.
@jpelckolsen I agree it's inconsistent and I was guessing there's some difference between test sets. But who knows....
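The arithmetic behind the dissonance can be made explicit. Using only the operating point reported in the preprint, the expected overall flag rate can never fall below the false alarm rate, whatever the true prevalence:

```python
# Expected fraction flagged = sensitivity * p + false_alarm * (1 - p),
# where p is the (unknown) true prevalence of fake papers.
sensitivity, false_alarm = 0.86, 0.44

for p in (0.00, 0.05, 0.30):
    flagged = sensitivity * p + false_alarm * (1 - p)
    print(f"prevalence {p:.0%} -> expect {flagged:.0%} flagged")
# Even with zero fakes, 44% of papers should be flagged, so a ~30%
# hit rate can't come from this operating point applied broadly.
```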

@ct_bergstrom I suspect that since Sabel works in *clinical* neuropsychology, he may only be using this algorithm on papers in his own field, where a hospital affiliation would be a must rather than a red flag?

The rest of this is sus af.

@ct_bergstrom

A colleague, who should know better when it comes to data analysis, circulated this article across our internal science network.

I would direct them to your thread above, but work blocks much of the Fediverse (but not Twitter).

This and other misuses of statistics in practice annoy me, as some of it directly impacts my work (and I'm in no way a statistician, nor a scientist).

@IceNine
> I would direct them to your thread above, but work blocks much of the Fediverse (but not Twitter)

WTF?!? Have you raised a ruckus with them about how ridiculous that is?

@ct_bergstrom

@strypey

It's probably a good thing, largely keeps me off during work. 😆

@IceNine
Not off Titter...

@strypey

Yeh, but I don't use that account any more. Was unamused to see the likes of who got unbanned, and what is now considered acceptable.

@IceNine
Good for you :) I only ever used my Titter account as a sockpuppet, echoing my public posts here.
@IceNine @strypey Pretty much. No desire to supply free content to a network that caters to convicted seditionists and wannabe Oswald Mosleys.
Oliver D. Reithmaier (@[email protected])

If your tool produces high rates of false positives, and other tools produce the same results as yours, the takeaway should not be that your tool is "as good as any". It's that your tool and its concept fucking suck. What a disgrace of an article & what a waste of money. https://www.science.org/content/article/fake-scientific-papers-are-alarmingly-common

@ct_bergstrom And the headline implies that neuroscience is the totality of science. Many geologists and astronomers and ecologists will be surprised to learn that they are not doing science.