My god. Everyone needs to read this:

“The Reverse-Centaur’s Guide to Criticizing AI”

(from @pluralistic)

https://pluralistic.net/2025/12/05/pop-that-bubble/


@linux_mclinuxface @pluralistic Thank you, this is great! Halfway through now. I love this bit:

"And because AI is just a word guessing program, because all it does is calculate the most probable word to go next, the errors it makes are especially subtle and hard to spot, because these bugs are literally statistically indistinguishable from working code (except that they're bugs)."

@macronencer @linux_mclinuxface @pluralistic

You are describing systems from before 2025.
As of May 2025 there is a reasoning layer on most public frontier models.
Additionally, many models fact-check and provide clickable references.
Many models today are de facto #RAG systems rather than pure #LLM

It's perfectly fine to have formed an informed, robust opinion on a tech you don't use.
But as the tech rapidly progresses, the baseline changes.
Increasingly, your opinion will diverge from the facts.
Increasingly, your opinion will seem informed ONLY to other non-users.

The other folk will see statements that describe ancient systems and conclude that the opinion is no longer informed.

I understand Doctorow wrote a well-regarded text, "How to criticise AI".
For the sake of efficacy, I hope there is a "How to stay up to date" chapter.

#AI is a moving target.

#regulateAI

@macronencer @linux_mclinuxface @pluralistic I guess that's what bugs are, fundamentally. Chunks of code that look right, but when you dig into it, don't do what you want.

Until now we've only produced them by accident.

@negative12dollarbill Not all of them look right. Ever taken a look at the DailyWTF? :)

Interestingly, if you switch "code" to "writing" in the above, you've also summarised an analogous issue. Once, a colleague thought a draft email I'd shared with him was verbose, so he asked ChatGPT to shorten it. It cut 30%, but also subtly changed the nuance of my meaning in three places.