The tipping point in my deciding that we should abolish scientific journals was my first experience as an editor. It made me feel both scientifically and morally uncomfortable to be making what I felt were necessarily ill-informed decisions that affected scientific progress and people's careers.

I don't think it's possible to be knowledgeable enough to quickly read a paper and understand its potential significance. But the entire system rests on this foundation.

I could also see that if I kept doing it, I'd gain confidence and it would start to feel normal, even though it was still wrong.

So instead I decided to quit those roles, campaign for an end to scientific journals, and help build new systems without their many flaws.

I do believe in the value of scientific expertise, of course, but I don't think we can predict what will turn out to be important or valuable in the future. Too much of scientific culture, and too many of our institutions, are based on what seems to me the palpably false assumption that we can.

@neuralreckoning Do you have a particular lead on what people can share that isn't megacorporate journal articles? Writing proofs that don't need to be vetted, in some sense? What do you think of Edrx's https://anggtwu.net/e/, viz. dense reusable knowledge sharing? Or what was your direction?

@screwlisp There are several journals, like those of the European Geosciences Union, which publish manuscripts online as soon as they are submitted and allow public comment. The manuscript is still reviewed by subject-area experts, and those reviews are posted as comments. Authors have to address both the expert reviews and the public comments in order to get a revised version published, with a different DOI. The comments and reviews, as well as the first submitted draft, stay published.

This is similar to the preprint model, but I have noticed a paucity of public comment on these manuscripts. Some consider it rude to write an unsolicited review; in a system like the one @neuralreckoning has in mind, however, that has to become the norm. I would very much like to see this sea change.

@Brad_Rosenheim @screwlisp @neuralreckoning

This is absolutely the right approach. Publication should be regarded as the start of peer review, not the end.

@david_chisnall @Brad_Rosenheim @screwlisp @neuralreckoning I also find it useful if reviewer identities are public. Although double-blind review has its advantages, the more specific the research topic gets, the higher the chance that people recognize each other just from the topic and their writing style. Anonymity opens the door to abuse.

We had a paper that was rejected for almost 2 years at different venues because it contradicted claims from another research group.

@Brad_Rosenheim @screwlisp oh this is so interesting, I had no idea that some journals were already this far along in experimenting with this model. Will look into that.

@neuralreckoning
Here is a paper that a recent Ph.D. graduate working in my research group just published. It went through several rounds of review (it was a nail-biter) but received no public comments.

https://bg.copernicus.org/articles/21/5361/2024/bg-21-5361-2024-discussion.html

@screwlisp

Deep-sea stylasterid δ18O and δ13C maps inform sampling scheme for paleotemperature reconstructions

Abstract. Deep-sea corals have the potential to provide high-resolution paleotemperature records to evaluate oceanographic changes in settings that are vulnerable to current and future ocean warming. The isotopic records preserved in coral skeletal carbonate, however, are limited by their large offsets from isotopic equilibrium with seawater. These “vital effects” are the result of biological influences (kinetic and metabolic) on the calcification of coral skeletons and are well known to drive oxygen and carbon stable isotope ratios (δ18O and δ13C, respectively) away from isotopic equilibrium with environmental variables. In this study, two calcitic stylasterid corals (Errina fissurata) are sampled via cross sections through their primary growth axes to create skeletal δ18O and δ13C maps. The maps reveal a consistent trend of increasing isotopic values toward the innermost portion of the cross sections, with minimal spatial change in carbonate mineralogy, the average center values being ∼1 ‰ and ∼3 ‰ closer to seawater δ18O and δ13C equilibrium values, respectively. We investigate possible mechanisms for these isotopic trends, including potential growth patterns that would drive spatial isotopic trends. Our results highlight the diversity of the stylasterid coral family, and because of our unique sampling strategy, we can prescribe that E. fissurata corals with minimal mineralogical variability be sampled from the center portions of their stems to achieve accurate paleotemperature reconstructions.

@Brad_Rosenheim @neuralreckoning @screwlisp

Hey Brad, looks like the link is broken. It takes me to a 404 error.

@johnb48 @neuralreckoning @screwlisp

Thanks for pointing that out. I was transcribing from another screen and some students came in for a meeting, and I hit send without checking it. Will fix after dinner!

Edit: fixed link

@screwlisp the problem is that right now people should be writing megacorp journal articles because that's what other scientists read. We need to change the culture around this. For a start, we should be doing more with preprints. In the longer term we need to find new mechanisms that allow for error correction without a journal involved. That's going to be harder. My attempt (very early stages) is https://scholar.nexus/ but it's not a functional product yet, just an idea.

@neuralreckoning @screwlisp

> the problem is that right now people should be writing megacorp journal articles because that's what other scientists read

It seems to me that we should add
... and because that counts for their careers.
Metrics, etc.
(Is impact factor still a metric of the day?
Things like SCOPUS are disproportionately significant.)

@neuralreckoning @screwlisp I don't think that e.g. mathematicians read megacorp journals much; they read arXiv preprints instead.

It's the bean counters who want these journal papers.

@dimpase @neuralreckoning @screwlisp

> the bean counters

I think in this context the beans are scientometric numbers.

@dimpase @screwlisp yeah things are better in maths and also in theoretical physics.
@neuralreckoning What do you recommend for ECRs who want to avoid corporate journals but also want a job?
@Talia right now there's no alternative. If you want a job you need to publish in those journals. The best we can do for the moment is to start creating an alternative ecosystem by publishing, sharing and potentially giving peer feedback on preprints.

@neuralreckoning

Hence huge biases towards known scientists: those the editor met at a poster session, or whose talk they heard before chatting with them over coffee afterwards.

I've had scientists thank me for inviting journal editors to conferences I organised, precisely because it led to them publishing in their journals through that personal connection.

When bioRxiv arrived, it felt like an immense breath of fresh air. And eLife's new reviewed-preprint publication model, despite being just as susceptible to these personal relationships, feels right as an editor. What's interesting is how much editors bring from the old model into the new one, whereas my approach is to send out for review any manuscript that is a proper, legitimate attempt at rigorous scientific research, which I claim depends largely on how well and in how much detail the Methods section is written.

A whole other matter is finding suitable reviewers: everyone wants to publish but not to review. Which is only logical, given the incentives and the academic reward system. Moving to a scientific publication system where papers aren't reviewed seems inevitable. Author reputation will be more important than ever, with all that implies, particularly for early career scientists.

#ScientificPublishing

@albertcardona Yes, we need to be developing the systems now that can offset the author reputation effect. For me, tools like Semantic Scholar are already starting to do this: it shows me papers from people I've never heard of that I'm super interested in, purely based on the content.
@neuralreckoning I wonder how these flaws can be avoided in any system without inviting a flood of dirt to sweep over and bury the genuine contributions. Do you have any ideas on that?
@hllizi post publication peer review! More details at https://scholar.nexus/

@neuralreckoning are there any alternatives in practice, either now or from before?

@nopsled

I can't speak to all of the original poster's potential reasons for complaining, but there are already journals that have done away with the significance criterion entirely.

That is, they only evaluate incoming articles on the quality of the work and the relevance to the journal in the sense of being on topic. Reviewers are specifically instructed not to evaluate significance. I've reviewed for some of them.

Journals *can* just choose to remove that factor.

@neuralreckoning

@doctorambient @neuralreckoning

I see. How can a reviewer decide on significance? Sounds like BS to me. That actually inhibits progress.

Sorry, I don't have a science background, but it looks like that to me.

@nopsled

I agree that significance is hard to evaluate. That's why the journals that I was talking about *eliminated* evaluating significance as a requirement.

I don't personally know anyone in science who thinks that significance is something that we should be evaluating. So this is a concept that is fading away already.

My point was that the OP makes it sound like there's no way to eliminate evaluating significance in journals, and that's just not true. They can just stop.
@neuralreckoning

@doctorambient @nopsled these journals are not the biggest problem, I agree. But they're also not the ones that get you a job. I'm also not sure you can really only evaluate 'quality of the work'. I certainly can't define that. Methodological soundness? Maybe. Even that's pretty subjective and I bet you'd find that reviewer estimations of methodological soundness are influenced by irrelevant factors (author reputation, writing quality, etc.).

@neuralreckoning @doctorambient

> But they're also not the ones that get you a job.

Which is a big deal actually.

> I'm also not sure you can really only evaluate 'quality of the work'. I certainly can't define that. Methodological soundness? Maybe. Even that's pretty subjective and I bet you'd find that reviewer estimations of methodological soundness are influenced by irrelevant factors (author reputation, writing quality, etc.).

I think methodology is most important. For example, if the sample is small or the questions are biased. But now that you mention it, I'm also not sure about that, haha.

Once I read, "No person is objective; only methodology can be objective."

But again, I'm not a scientist, so I don't have much of a clue about those things.

@nopsled yes but not ones that are considered prestigious enough to get you a job, so people don't tend to send what they consider to be their best papers to them.
@neuralreckoning A science-adjacent glassblower, I remember the 'reliability', 'reproducibility' and 'replication' discussion starting in the 90s. Wikipedia places it in this century. An "AI" lit review puts it in the 60s.
One potential solution: "Scientific publishing has begun using pre-registration reports to address the replication crisis. The registered report format requires authors to submit a description of the study methods and analyses prior to data collection."
https://en.wikipedia.org/wiki/Replication_crisis

@mcorbettwilson I'm not entirely convinced by pre-registration, but I do agree that traditional peer review as a way to evaluate scientific work is deficient and leads to all sorts of problems.

@neuralreckoning
I never published a scientific paper but I think I can relate.
At my last job I would write process documents for people to follow that ensured conforming products. Editors on the other side of the country who never looked at the things I was describing would change my words to make the document "correct" in their view, but then it no longer described a process that produced things conforming to our requirements.
And they would take weeks to mess these up, so, like, do I want their messed-up idea of my process, or do I want no process at all for several weeks?

And they don't even take credit for changing anything! It just looks like I'm stupid when they're done 🫠

@neuralreckoning
I didn't even know it could be a thing that people would just write stuff into quality documents without telling anyone and without putting their name to it. I had kind of assumed that would be unethical.

@neuralreckoning
Somebody (can't remember the name) once put part of the problem succinctly:

The university system has outsourced its employee evaluation to the publishers by way of the journal publication system.

To change the publication system, you need to find a different solution to this problem for the universities, or the system won't change. And it's a *hard* problem - who do you pick for a faculty position if papers and citation counts are not an indicator?

@neuralreckoning I thought the aim of a scientific editor was not to predict the impact of papers, but to check whether they contain mistakes and whether they're scientifically sound.
@PeterMotte that's true at some journals, but not the sort that determine people's careers.
@neuralreckoning I feel like this about interviewing candidates for job positions. The process seems hopelessly random and unreasonable.

@neuralreckoning not to mention many journals are absurdly #paywalled (#Elsevier are just the greediest #rentseekers!) whilst also charging for #submissions, to the point that some exist mostly as a means for #corporations to commit "#AssetDenial" against competitors by #publishing their own research, thus ensuring the #competition can't #patent a specific product.

  • Don't ask me how I know...