Today's threads (a thread)

Inside: Three more AI psychoses; and more!

Archived at: https://pluralistic.net/2026/03/12/normal-technology/

#Pluralistic

1/

Three more AI psychoses: Everyone calm down.

https://mamot.fr/@pluralistic/116219467129086436

2/

#15yrsago Notorious financier gets a “super-injunction” prohibiting the press from revealing that he is a banker https://www.telegraph.co.uk/finance/newsbysector/banksandfinance/8373535/Sir-Fred-Goodwin-former-RBS-chief-obtains-super-injunction.html

#10yrsago Shortly after her death, Harper Lee’s heirs kill cheap paperback edition of To Kill a Mockingbird https://newrepublic.com/article/131400/mass-market-edition-kill-mockingbird-dead

#10yrsago Web security company breached, client list (including KKK) dumped, hackers mock inept security https://arstechnica.com/information-technology/2016/03/after-an-easy-breach-hackers-leave-tips-when-running-a-security-company/

4/

Yesterday's threads: AI "journalists" prove that media bosses don't give a shit; and more!

https://mamot.fr/@pluralistic/116212586418986591

7/

My latest novel is "Picks and Shovels," a historical technothriller set in the Weird Era of the PC, about Ponzi schemes, techbros, and the dawn of enshittification:

https://us.macmillan.com/books/9781250865908/picksandshovels

--

My latest nonfiction book is the internationally bestselling "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," from MCD/Farrar, Straus and Giroux:

https://us.macmillan.com/books/9780374619329/enshittification/

8/

My ebooks and audiobooks (from FSGxMCD, Tor Books, Head of Zeus, McSweeney's, Beacon, Verso and others) are for sale all over the net, but I sell 'em too, and when you buy 'em from me, I earn twice as much and you get books with no DRM and no license "agreements."

https://craphound.com/shop/

9/

Upcoming appearances:

* #Barcelona: Enshittification with Simona Levi/Xnet (Llibreria Finestres), Mar 20
https://www.llibreriafinestres.com/evento/cory-doctorow/

* #Berkeley: Bioneers keynote, Mar 27
https://conference.bioneers.org/

* #Montreal: Bronfman Lecture (McGill), Apr 10
https://www.eventbrite.ca/e/artificial-intelligence-the-ultimate-disrupter-tickets-1982706623885

* #London: Resisting Big Tech Empires (LSBU)
https://www.tickettailor.com/events/globaljusticenow/2042691

10/

Upcoming appearances (cont'd):

* #Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow

* #Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html

* #HayOnWye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2

11/

Recent appearances:

* Launch for Cindy Cohn's "Privacy's Defender" (City Lights)
https://www.youtube.com/watch?v=WuVCm2PUalU

* Chicken Mating Harnesses (This Week in Tech)
https://twit.tv/shows/this-week-in-tech/episodes/1074

* The Virtual Jewel Box (U Utah)
https://tanner.utah.edu/podcast/enshittification-cory-doctorow-matthew-potolsky/

* Tanner Humanities Lecture (U Utah)
https://www.youtube.com/watch?v=i6Yf1nSyekI

* The Lost Cause
https://streets.mn/2026/03/02/book-club-the-lost-cause/

12/

You can follow these posts as a daily blog at pluralistic.net: no ads, trackers, or data-collection!

Here's today's edition: https://pluralistic.net/2026/03/12/normal-technology/

--

If you prefer a newsletter, subscribe to the plura-list, which is ad-/tracker-free, and is utterly unadorned save a single daily emoji. Today's is "🦏". Suggestions solicited for future emojis!

--

You can also get a fulltext RSS feed, licensed CC BY 4.0:

https://pluralistic.net/feed/

13/

I'm also on Bluesky. Read today's thread there at:

https://bsky.app/profile/doctorow.pluralistic.net/post/3mgvwg3ww6k2k

eof/

By Cory Doctorow (GPG 0xBF3D9110957E5F4C) (@doctorow.pluralistic.net)

"AI psychosis" is one of those terms that is incredibly useful and also almost certainly going to be deprecated in smart circles in short order because it is: a) useful; b) easily colloquialized to describe related phenomena; and c) adjacent to medical issues. 1/

@pluralistic If "psychosis" is a medical term and "delusion" is not so much, and you seem (to me, as a non-native English speaker) to be using them somewhat interchangeably, why not just stick to the latter?
@steelman @pluralistic Stuff like this reinforces my paranoia about a larger conspiracy, à la the Epstein files. I don't find it a coincidence that the files were released around the time of gen AI, so nobody can really tell what the truth is (especially those who have been victims of abuse) and those in power get to decide what happens. You can't make this shit up. Medicine, or science for that matter, is NOT objective or neutral in any sense.

@pluralistic

> There's nothing about AI per se that makes it exceptionally

There is: power, as in dE/dt. I can't think of any other technology in the last 100 years that used so much energy in so little time to do almost nothing usable. Historically, energy efficiency was improving.

I can't think of any other technology that would be so half-baked when turned into a product. 👉

@pluralistic Last but not least: accuracy and precision. Every part of our technological stack, from a chisel to a spacecraft, works based on precision and our knowledge of its limits. Quick example: a GPS receiver not only indicates its position but also communicates how accurate that indication is. Thus, a user knows if the reading is sufficiently precise for the task at hand. LLMs hallucinate without any indicator that they are doing so. We can't use them as a tool to further develop our technology.
@steelman @pluralistic Very much agree. In most domains, one plausible-but-wrong output can cause enough harm to outweigh the benefit of many hundreds or thousands of correct outputs. Even if you have an accuracy of 90% (which I'm skeptical of in the real world, but it's the sort of number I often hear trotted out), that's an error rate of 1 in 10, which is orders of magnitude worse than acceptable for most applications.
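The arithmetic behind this comment can be made concrete with a back-of-the-envelope expected-value sketch. This is an illustration, not anything from the original thread: the function name and the specific benefit/harm ratio (one wrong answer costing as much as 100 correct ones gain, per the "hundreds or thousands" claim above) are assumptions.

```python
def expected_value_per_query(accuracy, benefit_correct, harm_wrong):
    """Expected net value of one model query, given an accuracy rate
    and the per-outcome stakes (both in the same arbitrary units)."""
    return accuracy * benefit_correct - (1 - accuracy) * harm_wrong

# Assumed stakes: 90% accuracy, and one wrong output causes 100x the
# harm that one correct output provides in benefit.
ev = expected_value_per_query(0.9, 1.0, 100.0)
# 0.9 * 1.0 - 0.1 * 100.0 = -9.1: strongly net-negative despite "90% accuracy"
print(ev)
```

Under these assumed stakes, even 99% accuracy still comes out negative (0.99 − 0.01 × 100 = −0.01), which is the commenter's point: headline accuracy numbers mean little until you know the cost of a single wrong answer.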
@LynneOfFlowers The problem is deeper than "it's wrong 1 out of 10." We don't know how often (and how much) these devices are wrong, and we've got no way of telling other than evaluating every(!) answer they return (see the economy argument: the cost of using(!) them grows the more we use them). AFAIK there is no proof, nor even a conjecture, that they follow one or another statistical distribution. This is what makes them unmanageable, and thus exceptional as a technology.
@pluralistic
@steelman A very good point. The error rate can vary wildly based on exact domain, input distribution, phase of the moon, whether you said "please" before or after your request, etc. So maybe your test data gives some error rate that you deem acceptable, but then in production the error rate, and even the kinds of errors, are much different, with no way to predict this. And you can't spot-check LLMs: a few correct outputs doesn't mean the model has "got the idea" and that you can trust the rest.
@pluralistic I enjoyed this article very much, as usual with your posts. However (the dreaded shift), I believe your critique of the AI-critic psychoses falls a bit short. One piece of sophistry with words that I see is the lack of distinction between AI and LLMs. LLMs rely on huge companies doing all sorts of real-world damage, and our use of LLMs supports that damage. I think of LLMs as like CFCs: there is good reason to ban them or seek safer substitutes. LLMs are not just another tool.