Remember that abysmal attempt at creating a fake paper detector that #Science magazine trumpeted? The one that just looked to see if you used your institutional email address, had international collaborators, and were affiliated with a hospital?

The one that instantiated the authors' biases, which they then turned around and used as evidence for those biases?

Science has just published the letter that Brandon Ogbunugafor and I wrote in response.

Kudos to them for that...

https://www.science.org/doi/10.1126/science.adi7104

But their "editor's note" published alongside our letter is, not to put too fine a point on it, complete bullshit.

"Far from heralding or sensationalizing the tool, we presented it as a rough indicator of a real problem."

It’s not a rough indicator; their own data show that it entirely fails. More importantly, a rough indicator with racist consequences is far worse than no indicator at all, and the article neither notes these racist consequences nor this basic fact.

https://www.science.org/doi/10.1126/science.adj3681

@ct_bergstrom the institutional email bit is scary. It also hits articles based on thesis work of recent graduates.

Finding a research position after graduation takes time, and if you publish something in the meantime, a personal email is all that's left. The same applies if you decide to work in industry.

My first paper would have been impacted by this thing.

@ct_bergstrom
Their response suggests that they’ve forgotten the first Law of Holes …
@johnntowse I laughed aloud. This is exactly what happened.

@johnntowse @ct_bergstrom
😂
TIL about the first law of holes (the name, not the concept)

First I thought it would be something like "don't be an A hole". But that's probably the second law of holes 😜

https://en.m.wikipedia.org/wiki/Law_of_holes


@ct_bergstrom how does the email you used as a contact address become more important than the content of the paper, i.e., your intellectual ability to think and to express it in writing in a scientific paper?

I'm curious what happens to article commentaries if the eLife model gets popular.

If the official journal assessment is that the methodology is incomplete or inadequate, ppl won't have to write critical commentaries on crappy papers anymore, I guess?

On the other hand, there would likely be a wave of new commentaries that demand upgrading/downgrading the official assessment.

@ct_bergstrom Indeed; I'm reminded of Andrew Sullivan & his "I'm just trying to promote a healthy intellectual discourse" when called out for his amplification of that racist bell curve nonsense
@ct_bergstrom Gosh, it is frustrating how obtuse this response is. Also “…the limitations of tools for detecting fake papers such as…” implies a serious discussion of state-of-the-art methods for “detecting fake papers”, when this specific ‘tool’ is at best completely wacky (and at worst quite harmful).

@ct_bergstrom

Ugh! What’s the name of that fallacious rhetorical sleight-of-hand they used there — false binary? straw man? They pretend that there is no semantic distance between “not perfect” and “not adequate”, and that a parenthetical hand-wave is equivalent to a full-throated skeptical explanation of an exaggerated claim.

(but of course “complete bullshit” is a perfectly adequate name for it too)