So about five days ago, people on Bsky and Twttr started highlighting Elsevier science papers riddled throughout with GPT/LLM hallmark phrases. Dozens and dozens (at least) of peer-reviewed papers.

As I said then, and as I discussed in my dissertation, knowledge-making and expertise are always tricky processes, but they need deep, intentional confrontation and reform:
https://media.proquest.com/media/hms/PRVW/1/twSaS?_s=yIAhHtzhif4xd76I%2BihtcJJXTPw%3D

Anyway, now it looks like @404mediaco has dug into this and found *Even More of It*, and I am genuinely and completely struggling against despair at what the future of being an educator, researcher, and writer will even mean over the next 5 years.
https://www.404media.co/scientific-journals-are-publishing-papers-with-ai-generated-text/

Quite frankly, this should genuinely a) be the death of peer review as we know it (again: AS WE KNOW IT), and b) lead to a complete reformulation of the knowledge-making and expertise processes, but it won't, and that terrifies and saddens me.

@Wolven @404mediaco Just my opinion, but I think education will become more important as this misinformation/branding-obsessed/bullshit era develops, rapidly fueled by what is stupidly called "AI". However, I think traditional teaching will need to be combined with training to distinguish between reality and fakery.

On my reading list (I haven't read it yet) is a book by @ct_bergstrom and J. West called "Calling Bullshit: The Art of Skepticism in a Data-Driven World". It looks like one good source of ideas on how to move forward, especially for educators.

@aebrockwell @404mediaco @ct_bergstrom Carl and I are due for a conversation, because we work the same beat on a lot of these things