So about five days ago, people on Bsky and Twttr started highlighting Elsevier science papers riddled throughout with GPT/LLM hallmark phrases. Dozens and dozens (at least) of peer-reviewed papers.

As I said then, and as I discussed in my dissertation, knowledge-making and expertise are always tricky processes, and they need deep, intentional confrontation and reform:
https://media.proquest.com/media/hms/PRVW/1/twSaS?_s=yIAhHtzhif4xd76I%2BihtcJJXTPw%3D

Anyway, now it looks like @404mediaco has dug into this and found *Even More of It*, and I am genuinely and completely struggling against despair at what the future of being an educator, researcher, and writer will even mean over, and at the end of, the next 5 years.
https://www.404media.co/scientific-journals-are-publishing-papers-with-ai-generated-text/

Quite frankly, this should genuinely a) be the death of peer review as we know it (again: AS WE KNOW IT), and b) lead to a complete reformulation of the knowledge-making and expertise processes. But it won't, and that terrifies and saddens me.

@Wolven @404mediaco @steve

Peer review has been a broken joke for a while, imho.

Sometimes it works. But more often than not, it slows the publication of legit work because of academic beefs and fiefdoms, while still allowing junk to get through.

All while delaying knowledge, enriching publishers, and overburdening researchers.

I used to think arXiv and similar were the answer. But how can those largely volunteer-run, community publications be expected to keep up with filtering out LLM junk?