Remember that abysmal attempt at creating a fake paper detector that #Science magazine trumpeted? The one that just looked to see if you used your institutional email address, had international collaborators, and were affiliated with a hospital?

The one that instantiated the authors' biases, which they then turned around and used as evidence for those very biases?

Science has just published the letter that Brandon Ogbunugafor and I wrote in response.

Kudos to them for that...

https://www.science.org/doi/10.1126/science.adi7104

But their "editor's note" published alongside our letter is, not to put too fine a point on it, complete bullshit.

"Far from heralding or sensationalizing the tool, we presented it as a rough indicator of a real problem."

It’s not a rough indicator; their own data show that it fails entirely. More importantly, a rough indicator with racist consequences is far worse than no indicator at all, and the article notes neither these racist consequences nor this basic fact.

https://www.science.org/doi/10.1126/science.adj3681

I'm curious what happens to article commentaries if the eLife model gets popular.

If the official journal assessment is that the methodology is incomplete or inadequate, people won't have to write critical commentaries on crappy papers anymore, I guess?

On the other hand, there would likely be a wave of new commentaries demanding that the official assessment be upgraded or downgraded.