This investigation of Ranga Dias' superconductivity publications is remarkable for multiple reasons.

https://www.nature.com/articles/d41586-024-00716-2

Nobody comes out of it well, but Nature are much more transparent about the editorial process than I can ever remember. (It's a little unclear whether that was spontaneous but, if not, the frequently claimed independence of Nature News came good.)

Thread. /1

Superconductivity scandal: the inside story of deception in a rising star’s physics lab

Ranga Dias claimed to have discovered the first room-temperature superconductors, but the work was later retracted. An investigation by Nature’s news team reveals new details about what happened — and how institutions missed red flags.

The "research" is at times risible. Key experimental results appeared suddenly in a manuscript version that lab members were given only a couple of hours to comment on before submission to Nature.

"When the students asked Dias about the stunning new data, they say, he told them he had taken all the resistance and magnetic-susceptibility data before coming to Rochester."

Just nonchalantly sitting on proof of room-temperature superconductivity for a few years, as one does. /2

The students are definitely not the villains of the piece, but if they "did not suspect misconduct at the time" and "trusted their adviser", they seem somewhat naive under the circumstances. /3

For the first paper, Nature engaged three referees and there were three rounds of review. One referee was strongly positive; the other two did not support publication. Nature went ahead anyway.

I can't think of a previous black-and-white example where Nature have admitted allowing impact to override quality, although that has always been the tacit implication of their editorial policy. And this is exactly the result they risk with that policy. /4

@BorisBarbour

For as long as I can remember, they've always made it quite explicit that their editors reign supreme and reviewers only advise them - and that this goes in both directions.

In the words of the now-infamous Declan Butler, it's "peer-review light": the non-peers make the main decisions and the peers are relegated to the back seat.

@brembs @BorisBarbour "For as long as I can remember, they've always made it quite explicit that their editors reign supreme and reviewers only advise them - and that this goes in both directions."

Isn't that how journals started, and how they're supposed to function? The role of reviewers is to advise the editor, not be the editor and make decisions for the journal.

If editors aren't supposed to make their own judgement calls, why have trained scientist experts be editors at all?

@brembs @BorisBarbour Sure, this sometimes gets you the Benveniste affairs of the world... That's what happened here, right? But that's built into the system, which relies on good science winning out in the end. And it did win out here too. So is there really a problem?

Nature's a private company. They're allowed to screw up, and we're allowed to judge the sum of their work and decide if their error rate is *unacceptably* high. Doing peer review is voluntary, we vote with our feet.

@MarkHanson @brembs

Does the policy pass the honesty test: would they publish if they had to post the referee reports alongside, with only a single positive one? I'm guessing no.

I think Rochester and the funders come out of this affair far worse than Nature. But there are plenty of things Nature can improve upon:
- do more to resolve scientific issues between referees before accepting
- bear in mind track records for quality/integrity
- contact all authors in an investigation

@BorisBarbour 100% agree.

Re: "dangerous" - to whom?

What sort of error rate should journals be allowed? Shouldn't we just let Nature accept the egg on their face and we all move on?

I guess if I summed up my stance: science does *not* have a zero-tolerance policy on being wrong. The issue here stems from giving undue weight to 'published' as a proxy for 'true'.

This isn't some failure of the scientific method. As emphasized here, the scientific method doesn't end at publication.

@MarkHanson @BorisBarbour

The key issue is the system itself: we publish a paper and pretend it's the ultimate truth on the matter. A system shift is needed to dispel that assumption about published papers, and instead to more humbly publish results as the latest take on the matter - correct or not, but hopefully constructive and insightful.

A first step to that end is to stop using papers as tokens of academic currency weighted by publication venue, and for any evaluators to start actually reading the papers.

#ScientificPublishing

@albertcardona @MarkHanson @BorisBarbour

Precisely, Albert!

Some of us are old enough to remember Nature's old tagline, "the world's best science and medicine" - pretty much the opposite of what the data say (which may be one reason why they stopped using it) 🤣

I'd guess that at this point, 30 years into the debate, most people with some competence probably agree that the system is FUBAR, like Albert says. That kind of consensus has been emerging over the last decade or so.

@albertcardona @MarkHanson @BorisBarbour

The consensus that we will eventually need to replace academic journals has only been emerging in the last 2-3 years, and mostly here in Europe, more slowly elsewhere.

@brembs @albertcardona @MarkHanson @BorisBarbour
@neuralreckoning

I think a lot of the recognition that we will need to replace academic journals soon has come from the realization that bioRxiv, PsyArXiv, and medRxiv have not been the disasters many thought they would be*. A lot of people thought that peer review was critical to the success of the enterprise, and therefore that we had to put up with the journals because we needed the peer-review gatekeeping. However, it has become clear that (within a field) labs can mostly do their own peer review.

It is not clear what we can do about science outside one's field. As a scientist, how can I know whether to believe something outside my immediate field? And how should we control what journalists, politicians, and clinicians trust, given that they do not have the training to do their own "in-lab" peer review?

Nevertheless, and importantly, now that we have preprint servers and can compare pre- and post-peer-review versions, it is pretty clear that peer review isn't doing much, which gives us grounds to say that the costs (excessive publisher profits, reviewer time, etc.) are not worth the gains.

* Yes, I know, arXiv has been around for many many years. But people somehow thought biology, psychology, and the other non-physics fields were different. ¯\_(ツ)_/¯

@adredish @brembs @albertcardona @MarkHanson @BorisBarbour the greatest trick the publishers ever pulled was convincing the world that peer review was necessary. 😉

@neuralreckoning Serious question from someone outside this debate: what do you think of editors? I heard your comments on the Brain Inspired podcast - very interesting.

@carl24k I think they're almost universally very public spirited people with a strong sense of duty, willing to do a thankless task to make science better.

With that said - and this is the bit that will get me in trouble - I think they're wrong that it makes science better.

I see two possible roles of an academic editor, and for both of them the journal structure with pre-publication peer review is the wrong way to achieve those ends, and leads to systematic biases that distort science (I've written about this in the articles that I'll post at the end).

The first role of an editor is to find and ideally fix errors. Scientists all know in practice that this doesn't work, and the evidence bears that out. Most errors are not picked up by pre-publication peer review, and post-publication ongoing peer review does a much better job. We should bite the bullet and switch to that immediately.

The second role is to curate good science. I want to divide this role into two. The first part of the role is picking work that would be of interest to a particular community. This is great, but doesn't have to be - and shouldn't be - tied to publication. I love it when individuals or groups come up with curated weekly or monthly reading lists of papers/preprints for example.

The second part of the role is the problematic one: selecting work for publication based on predictions of its likely impact. I think this is an impossible task. Or rather, it's impossible to predict what will have meaningful impact. It's probably rather easy to predict what will get well cited - I'd guess a fairly simple machine-learning model using just word frequencies in the abstract could do as well as or better than most of us. But predicting what will have meaningful, lasting impact on a field is - to me - obviously impossible. And pretending it's possible leads to bias.

If your judgements can be factorised as signal + bias + noise, and there is no reliable signal, then your judgement is either random if noise dominates (the best case) or bias if not (the worst case). If your decisions are consistent, this is almost certainly just an indication that they are biased, not that you are picking up on signal.
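To make that factorisation concrete, here's a minimal sketch in Python (all numbers hypothetical and illustrative, not fitted to any real data) of why consistency between editors doesn't imply they're tracking real impact: two simulated editors who share the same biases agree with each other at r ≈ 0.5 even when their scores carry no information about true impact at all.

```python
# Minimal sketch of the signal + bias + noise argument.
# Assumption (illustrative only): an editor's score decomposes as
#   score = w_signal * true_impact + shared_bias + private_noise
import numpy as np

rng = np.random.default_rng(0)
n_papers = 10_000

true_impact = rng.normal(size=n_papers)  # lasting impact, unobservable at submission time
shared_bias = rng.normal(size=n_papers)  # e.g. famous lab, hot topic, fashionable keywords

def editor_score(w_signal):
    noise = rng.normal(size=n_papers)    # idiosyncratic taste, mood, workload
    return w_signal * true_impact + shared_bias + noise

for w in (0.0, 0.2):                     # w = 0: no reliable signal at all
    a, b = editor_score(w), editor_score(w)
    agreement = np.corrcoef(a, b)[0, 1]            # consistency between two editors
    validity = np.corrcoef(a, true_impact)[0, 1]   # how well scores track real impact
    print(f"signal weight {w}: editors agree at r={agreement:.2f}, "
          f"but track true impact only at r={validity:.2f}")
```

Even with no signal at all (w = 0), the two editors agree at r ≈ 0.5, because agreement measures shared bias plus signal and cannot distinguish between them, while their scores are uncorrelated with true impact. That's exactly the point above: consistent decisions are not evidence of signal.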

So to get back to the question. I think editors are trying to do the right thing, but inadvertently they are just reinforcing structural biases that are present throughout science.

And if you find that sort of thing interesting, more on my science reform blog:

https://thesamovar.github.io/zavarka/

Zavarka

Thoughts on reforming science, publishing and academia.
