This investigation of Ranga Dias' superconductivity publications is remarkable for multiple reasons.

https://www.nature.com/articles/d41586-024-00716-2

Nobody comes out of it well, but Nature are much more transparent about the editorial process than I can ever remember. (It's a little unclear if that was spontaneous, but, if not, the frequently claimed independence of Nature News came good.)

Thread. /1

Superconductivity scandal: the inside story of deception in a rising star’s physics lab

Ranga Dias claimed to have discovered the first room-temperature superconductors, but the work was later retracted. An investigation by Nature’s news team reveals new details about what happened — and how institutions missed red flags.

The "research" is at times risible. Key experimental results appeared suddenly in a manuscript version upon which lab members were given a couple of hours to comment before submission to Nature.

"When the students asked Dias about the stunning new data, they say, he told them he had taken all the resistance and magnetic-susceptibility data before coming to Rochester."

Just nonchalantly sitting on proof of room-temperature superconductivity for a few years, as one does. /2

The students are definitely not the villains of the piece, but if they "did not suspect misconduct at the time" and "trusted their adviser", they seem somewhat naive under the circumstances. /3

For the first paper, Nature engaged three referees and there were three rounds of review. One referee was strongly positive, the other two did not support publication. Nature went ahead anyway.

I can't think of a previous black-and-white example where Nature have admitted allowing impact to override quality, although that has always been the tacit implication of their editorial policy. And this is exactly the result they risk with that policy. /4

@BorisBarbour

For as long as I can remember, they've always made it quite explicit that their editors reign supreme and reviewers only advise them - and that this goes in both directions.

In the words of the now-infamous Declan Butler, "peer-review light": the non-peers make the main decisions and the peers are relegated to the back seat.

@brembs @BorisBarbour "For as long as I can remember, they've always made it quite explicit, that their editors reign supreme and reviewers only advise them - and that this goes in bnoth directions."

Isn't that how journals started, and how they're supposed to function? The role of reviewers is to advise the editor, not be the editor and make decisions for the journal.

If editors aren't supposed to make their own judgement calls, why have trained scientist experts be editors at all?

@brembs @BorisBarbour Sure, this sometimes gets you the Benveniste affairs of the world... That's what's happened here, right? But that's built into the system, which relies on good science winning out in the end. And it did that here also. So is there really a problem?

Nature's a private company. They're allowed to screw up, and we're allowed to judge the sum of their work and decide if their error rate is *unacceptably* high. Doing peer review is voluntary; we vote with our feet.

@MarkHanson @brembs

Does the policy pass the honesty test: would they publish if they had to post the referee reports alongside, with only a single positive one? I'm guessing no.

I think Rochester and the funders come out of this affair far worse than Nature. But there are plenty of things Nature can improve upon:
- do more to resolve scientific issues between referees before accepting
- bear in mind track records for quality/integrity
- contact all authors in an investigation

@BorisBarbour 100% agree.

Re: "dangerous" - to who?

What sort of error rate should journals be allowed? Shouldn't we just let Nature accept the egg on their face and we all move on?

I guess if I summed up my stance: science does *not* have a zero-tolerance policy on being wrong. The issue here stems from giving undue weight to being 'published' as meaning 'true'.

This isn't some failure of the scientific method. As emphasized here, the scientific method doesn't end at publication.

@MarkHanson @BorisBarbour

The key issue is the system itself: publish a paper and pretend it's the ultimate truth on the matter. A system shift is needed to negate that assumption about published papers and to instead, more humbly, publish results as the latest take on the matter, correct or not but hopefully constructive and insightful. A first step to that end is to stop using papers as tokens of academic currency weighted by publication venue, and for any evaluators to start reading the papers.

#ScientificPublishing

@albertcardona @MarkHanson @BorisBarbour

Precisely, Albert!

Some of us are old enough to remember Nature's old tagline, "the world's best science and medicine" - pretty much the opposite of what the data say (which may be one reason why they stopped using it) 🤣

I'd guess at this point, 30 years into the debate, most people with some competence probably agree that the system is FUBAR, like Albert says. That kind of consensus has been emerging over the last decade or so.

@albertcardona @MarkHanson @BorisBarbour

The consensus that we will eventually need to replace academic journals has only been emerging in the last 2-3 years, and mostly here in Europe, more slowly elsewhere.

@brembs @albertcardona @MarkHanson @BorisBarbour
@neuralreckoning

I think a lot of this recognition that we will soon need to replace academic journals comes from the realization that bioRxiv, PsyArXiv, and medRxiv have not been the disasters many thought they would be*. I think a lot of people thought that peer review was critical to the success of the enterprise, and therefore we had to put up with the journals because we needed the peer-review gatekeeping. However, it has become clear that (within field) labs can mostly do their own peer review.

It is not clear what we can do about science outside one's field. As a scientist, how can I know whether to believe something outside my immediate field? And how should we control what journalists, politicians, and clinicians trust, given that they do not have the training to do their own "in-lab" peer review?

Nevertheless, importantly, now that we have preprint servers and can compare pre- and post-peer-review versions, it is pretty clear that peer review isn't doing much, which gives us the ability to say that the costs (excessive publisher profits, reviewer time costs, etc.) are not worth the gains.

* Yes, I know, arXiv has been around for many many years. But people somehow thought biology, psychology, and the other non-physics fields were different. ¯\_(ツ)_/¯

@adredish @brembs @albertcardona @MarkHanson @BorisBarbour the greatest trick the publishers ever pulled was convincing the world that peer review was necessary. 😉
@neuralreckoning serious question from someone outside this debate: what do you think of editors? I heard your comments on the Brain Inspired podcast - very interesting

@carl24k I think they're almost universally very public spirited people with a strong sense of duty, willing to do a thankless task to make science better.

With that said - and this is the bit that will get me in trouble - I think they're wrong that it makes science better.

I see two possible roles of an academic editor, and for both of them the journal structure with pre-publication peer review is the wrong way to achieve those ends, and leads to systematic biases that distort science (I've written about this in the articles that I'll post at the end).

The first role of an editor is to find and ideally fix errors. Scientists all know in practice that this doesn't work, and the evidence bears that out. Most errors are not picked up by pre-publication peer review, and post-publication ongoing peer review does a much better job. We should bite the bullet and switch to that immediately.

The second role is to curate good science. I want to divide this role into two. The first part of the role is picking work that would be of interest to a particular community. This is great, but doesn't have to be - and shouldn't be - tied to publication. I love it when individuals or groups come up with curated weekly or monthly reading lists of papers/preprints for example.

The second part of the role is the problematic one - it's selecting work for publication based on predictions of its likely impact. I think this is an impossible task. Or rather, it's impossible to predict what will have meaningful impact. It's probably rather easy to predict what will get well cited. I'd guess a fairly simple machine learning model could probably do as well or better than most of us just using word frequencies in the abstract. But predicting what will have meaningful lasting impact on a field is - to me - obviously impossible. And pretending it's possible leads to bias. If you have judgements that can be factorised as signal + bias + noise, and there is no reliable signal, then your judgement is either random if noise dominates (the best case) or bias if not (the worst case). If your decisions are consistent, this is almost certainly just an indication that they are biased, not that you are picking up on signal.
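(As an aside on the "fairly simple machine learning model" idea above: a minimal sketch of what such a citation-count predictor could look like is below. Everything in it is hypothetical - the data file, its column names, and the choice of TF-IDF word frequencies plus a ridge regression are illustrative assumptions, not anything the poster or any journal actually uses.)

```python
# Hypothetical sketch: predict (log) citation counts from abstract word frequencies.
# The CSV file and its "abstract"/"citations" columns are assumptions for illustration.
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("abstracts_with_citations.csv")  # hypothetical dataset

# Bag-of-words features from the abstract text only.
vectorizer = TfidfVectorizer(max_features=5000, stop_words="english")
X = vectorizer.fit_transform(df["abstract"])
y = np.log1p(df["citations"])  # log-transform the heavy-tailed citation counts

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)

print("R^2 on held-out abstracts:", r2_score(y_test, model.predict(X_test)))
```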

So to get back to the question. I think editors are trying to do the right thing, but inadvertently they are just reinforcing structural biases that are present throughout science.

And if you find that sort of thing interesting, more on my science reform blog:

https://thesamovar.github.io/zavarka/

Zavarka

Thoughts on reforming science, publishing and academia.


@adredish @albertcardona @MarkHanson @BorisBarbour @neuralreckoning

Yes, I completely agree, this is a major point. Loved the '*' too, spot on! 😆

I think peer review done in a clever, sophisticated way can be useful, e.g. when there is big societal interest/impact, or if peers decide that there is some discovery worth telling people outside of the community about, or if there is specific outside demand, or some such.

@adredish @albertcardona @MarkHanson @BorisBarbour @neuralreckoning

If designed in a visually clever way, what we now still call 'preprints' would be easily discernible from more vetted, reproduced, reviewed, cited, open science, whatever material.

@brembs @albertcardona @MarkHanson @BorisBarbour @neuralreckoning

I think the key is that we need to separate within-field from societal impact.

I think the key is that everyone has the timeline of science wrong. I have no idea if a given paper is right or wrong (and am no longer convinced that peer review will catch errors), but over the course of decades, science gets things really right. The science we have been working on for 50 years is really solid.

Personally, if I could,* I would make science unavailable to the public for some time (1 year? 5 years? 10 years? - I could make a good case for a decade) and only available to be discussed within the field until then. Basically, the idea would be that "peer review" comes from the additional experiments, discussion, and "revision" that make up the normal scientific process.

* Of course, this is never going to happen.

@adredish @albertcardona @MarkHanson @BorisBarbour @neuralreckoning

Yes, completely agree! I think it may be second-best to just deprecate the early work, so nobody would want to see it. 😆

"Just talk among scientists, to early to tell".

This will not be perfect, but neither would a wall around science be. I think such a deprecation is both feasible and actually warranted?

@adredish @brembs @MarkHanson @BorisBarbour @neuralreckoning

Great point on the changing perception of preprints in the biological sciences. For me a published paper is always like a preprint – I read it with an equal amount of scrutiny – so I haven't noticed any difference between before and after the rise of preprints.

On the "outside field" point: I reckon this is an issue already now and has always been. Peer review is not at all a guarantee, as shown time and again for work that many care about (room-temperature superconductivity being the latest example); and a number of still unexamined peer reviewed studies wouldn't pass muster either if anyone bothered to look.

Journalists, unless they are themselves trained in the field, are limited to reporting what those in the field have commented. Politicians, on the other hand, are meant to trust at face value the reports from their specialists – the impact forecast presented in executive-summary form – and evaluate them against other pressing needs in society to make, precisely, a political decision. Clinicians are perhaps lacking such counselling from specialists (and the void is filled by unscrupulous pharma companies), but, in compensation, have considerable training themselves.

@albertcardona @adredish @brembs @MarkHanson @BorisBarbour @neuralreckoning

Great points in this discussion. I'd like to add two (very readable) blog posts in which Adam Mastroianni argues that peer review (and the publishing reputation hierarchy) in their current form emerged quite recently, as a bureaucratic requirement of public funding.

This means that peer review is a perfunctory QA system whose primary purpose is to make research legible.

https://www.experimental-history.com/p/the-rise-and-fall-of-peer-review

https://www.experimental-history.com/p/the-dance-of-the-naked-emperors

The rise and fall of peer review

Why the greatest scientific experiment in history failed, and why that's a great thing

Experimental History

@MarkHanson @brembs

The experts they showed the reports to for this article shared your view and don't appear to have found the decision shocking.

Still, deciding to run with one positive report seems dangerous.

And your comment raises the interesting question of the level of expertise of the professional editors.

@BorisBarbour was in the middle of a 2nd post that maybe responds to that point :)

https://fediscience.org/@MarkHanson/112076157010161685

I've been thinking on this a lot recently... it's kinda messed up that many journals systematize the peer-review recommendations in terms of "accept/reject." Like... reviewers are consulted for comments, not to do the editor's job. 1-2 whole generations of scientists have been raised with the idea that editors are just rubber stamps with little power. Is that really the way it should be?

MAHanson (@[email protected])

@[email protected] @[email protected] Sure, this sometimes gets you the Benveniste affairs of the world... That's what's happened here right? But that's built in to the system, which relies on good science winning out in the end. And it did that here also. So is there really a problem? Nature's a private company. They're allowed to screw up, and we're allowed to judge the sum of their work and decide if their error rate is *unacceptably* high. Doing peer review is voluntary, we vote with our feet.

FediScience.org

@MarkHanson @BorisBarbour

Professional editor is a job that should not exist, IMHO. I cannot see any reason or justification for it.

@brembs @BorisBarbour I do disagree. I think researchers are already asked to do far too much. And any job you want done well deserves to be a paid position.

Now, professional editor for a for-profit corporation? That's extremely unnecessary.

It occurs to me there are a lot of parallels here to public/private news... Both public & private news are essential, but if translated to science: is private science essential? Not in terms of ideals, but in terms of the realities of how information publishing plays out?

@MarkHanson @BorisBarbour

In the best of all worlds, one wouldn't need any editors at all. There would, maybe, be editors "on call" as arbiters whenever there was a dispute that couldn't be handled by authors and reviewers themselves.

Professional editors at for-profit corporations are fine, obviously, but the veneer of peer-review is totally unnecessary. The GlamMagz should just stop this silly pretense.

@brembs @MarkHanson @BorisBarbour Clout chasing is the cause of this kind of decision, not the existence of professional editors. Many weak but provocative papers were also accepted by unpaid academic editors, so I guess we should get rid of them too.

@mattjhodgkinson @MarkHanson @BorisBarbour

In the best of all worlds, yes, we could and should. I'd see them as a necessity, rather than a luxury?

@MarkHanson @BorisBarbour

For me, editors know the topic of their journal and decide if the topic fits. Once peer-review has started, editors are just mediators between reviewers and authors - and not prophets who divine the future impact of research.

@brembs @MarkHanson @BorisBarbour That's a limited view of the role of the editor: they're not just a tennis umpire watching the paper go from author to reviewer and back. Good editors screen papers and desk reject if below standard, know who to invite as peer reviewers and cover all the necessary expertise, themselves critically appraise the work (though not to the same extent as a field expert), and understand how to apply editorial policies, reporting guidelines, and publication ethics.

@mattjhodgkinson @MarkHanson @BorisBarbour

Yes indeed!
My post was about how I think the role of editors could be, not what it currently is. Sorry if I phrased it ambiguously.

@brembs @MarkHanson @BorisBarbour I agree that editors should not be "prophets who divine the future impact of research"!