So about five days ago, people on Bsky and Twttr started highlighting Elsevier science papers riddled with GPT/LLM hallmark phrases. Dozens and dozens (at least) of peer-reviewed papers.

As I said then, and as I discussed in my dissertation, knowledge-making and expertise are always tricky processes, and they need deep, intentional confrontation and reform:
https://media.proquest.com/media/hms/PRVW/1/twSaS?_s=yIAhHtzhif4xd76I%2BihtcJJXTPw%3D

Anyway, now it looks like @404mediaco has dug into this and found *Even More of It*, and I am genuinely and completely struggling against despair at what being an educator, researcher, and writer will even mean over the next five years.
https://www.404media.co/scientific-journals-are-publishing-papers-with-ai-generated-text/

Quite frankly, this should genuinely a) be the death of peer review as we know it (Again: AS WE KNOW IT), and b) lead to a complete reformulation of the knowledge-making and expertise processes, but it won't, and that terrifies and saddens me.

@Wolven @404mediaco I mean my institution is *actively encouraging* academics to use LLMs in their work; it's part of the official strategy document. So it goes deeper than just flaws in peer review: a bunch of researchers have concluded that this is a legitimate way to do science 🤮
@jimbob @Wolven @404mediaco I think using LLMs in and of itself won't make a work illegitimate. One can think of it as a sophisticated autocomplete: if it suggests you write "iture" after you put "furn", that doesn't make your work illegitimate. It's harder to believe the analogy holds for LLMs, tho I bet it does.
@sanfierro @Wolven @404mediaco I had a student recently, whose written English was not great, who attempted to use an LLM to improve their writing on a paper - just employing it sentence by sentence. The results were *much* worse than their writing had been without it... even at that small scale, small, nonsensical mistakes appeared.