"AI psychosis" is one of those terms that is incredibly useful and also almost certainly going to be deprecated in smart circles in short order, because it is: a) useful; b) easily colloquialized to describe related phenomena; and c) adjacent to medical issues. There's a group of people who feel very strongly that any metaphor that implicates human health is intrinsically stigmatizing and must be replaced with an awkward, lengthy phrase that no one can remember and only insiders understand.

1/

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2026/03/12/normal-technology/#bubble-exceptionalism

2/

Pluralistic: Three more AI psychoses (12 Mar 2026) – Pluralistic: Daily links from Cory Doctorow

So while we still can, let us revel in this useful term to talk about some very real pathologies in our world.

Formally, "AI psychosis" describes people who have delusions that are possibly induced, and definitely reinforced and magnified, by a chatbot. AI psychosis is clearly alarming for people whose loved ones fall prey to it, and it has been the subject of much press and popular attention, especially in the extreme cases where it has resulted in injury or death.

3/

It's possible for AI psychosis to be both a new and alarming phenomenon and also to be on a continuum with existing phenomena. Paranoid delusions aren't new, of course. Take "Morgellons Disease," a psychosomatic belief that you have wires growing in your body, which causes sufferers to pick at their skin to the point of creating suppurating wounds.

4/

Morgellons emerged in the 2000s, but the name refers to a 17th-century case-report of a patient who suffered from a similar delusion:

https://en.wikipedia.org/wiki/A_Letter_to_a_Friend

Morgellons is *both* a 400 year old phenomenon and an internet pathology. How can that be? Because the internet makes it easier for people with sparsely distributed traits to locate one another.

5/


That is why the internet era is characterized by the coherence of people with formerly fringe characteristics into organized blocs, for better (gender minorities, #MeToo) and worse (Nazis).

Morgellons is rare, but if you suffer from it, it's easy for you to locate virtually *every* other person in the world with the same delusion and for all of you to reinforce and egg on your delusional beliefs.

6/

Morgellons isn't the only delusion that the internet reinforces, of course. "Gang stalking delusion" is a belief in a shadowy gang of sadistic tormentors who sneak hidden messages into song lyrics and public signage and innuendo in overheard snatches of other people's conversations. It is an incredibly damaging delusion that ruins people's lives.

Gang stalking delusion isn't new, either - as with Morgellons, there are historical accounts of it going back centuries.

7/

But the internet supercharged gang stalking delusion by making it easy for GSD sufferers to find one another and reinforce one another's beliefs, helping each other spin elaborate explanations for why the relatives, therapists, and friends who try to help them are actually in on the conspiracy. The result is that GSD sufferers end up ever more isolated from people who are trying mightily to save them, and more connected to people who drive them to self-harm.

8/

Enter chatbots. Ready access to eager-to-please LLMs at every hour of the day or night means that you don't even have to find a forum full of people with the same delusion as you, nor do you have to wait for a reply to your anguished message. The LLM is always there, ready to fire back a "yes-and" improv-style response that drives you deeper and deeper into delusion:

https://pluralistic.net/2025/09/17/automating-gang-stalking-delusion/

9/


It's possible that there are delusions that are even more rare than GSD or Morgellons that AI is surfacing. Imagine if you were prone to fleeting delusional beliefs (and whomst amongst us hasn't experienced the bedrock certainty that we put something down *right here*, only to find it somewhere else and not have any idea how that happened?). Under normal circumstances, these cognitive misfires might be fleeting moments of discomfort, quickly forgotten.

10/

But if you are already habituated to asking a chatbot to explain things you don't understand, it might well yes-and you into an internally consistent, entirely wrong belief - that is, a delusion.

Think of how often you noticed "42" after reading *Hitchhiker's Guide to the Galaxy*, or how many times "6-7" crops up once you've experienced a baseline of exposure to adolescents.

11/

Now imagine that an obsequious tale-spinner was sitting at your elbow, helpfully noting these coincidences and fitting them into a folie-à-deux mystery play that projected a grand, paranoid narrative onto the world. Every bit of confirming evidence is lovingly cataloged, all disconfirming evidence is discounted or ignored. It's fully automated luxury QAnon - a self-baking conspiracy that harnesses an AI in service of driving you deeper and deeper into madness.

12/

That's the original "AI psychosis" that the term was coined to describe. As Sam Cole notes in her excellent "How to Talk to Someone Experiencing 'AI Psychosis,'" mental health practitioners are not entirely comfortable with the "psychosis" label:

https://www.404media.co/ai-psychosis-help-gemini-chatgpt-claude-chatbot-delusions/

13/


"Psychosis" here is best understood as an *analogy*, not a diagnosis, and, as already noted, there is a large cohort of very persistent people who make it their business to eradicate analogies that make reference to medical or health-related phenomena. But these analogies are very hard to kill, because they do useful work in connecting unfamiliar, novel phenomena with things we already understand.

It's true that these analogies *can* be stigmatizing, but they *needn't* be.

14/

As someone with an autoimmune disorder, I am not bothered by people who describe ICE as an autoimmune disorder in which antibodies attack the host, threatening its very life. I am capable of understanding "autoimmune disorder" as referring to both a literal, medical phenomenon; *and* a figurative, political one. I have never found myself confusing one for the other.

15/

"AI psychosis" is one of those very useful analogies, and you can tell, because "AI psychosis" has found even *more* metaphorical uses, describing *other* bad beliefs about AI. Today, I want to talk about three of these AI psychoses, and how they relate to one another: the investor AI delusion, the boss AI delusion, and the critic AI delusion.

16/

Let's start with the investors' delusion. AI started as an investment project from the usual suspects: venture capitalists, private wealth funds, and tech monopolists with large cash reserves and ready access to loans during the cheap credit bubble. These entities are accustomed to making large, long-shot bets, and they were extremely motivated to find new markets to grow into and take over.

17/

Growing companies *need* to keep growing, but not because they have "the ideology of a tumor." Growing companies' imperative to keep growing isn't ideological at all - it's material. Growth companies' stock trades at a high price-to-earnings (PE) multiple, which means that they can use their stock like money when buying other companies and hiring key employees.

18/

But once those companies' growth slows down, investors revalue those shares at a much lower PE multiplier, which makes individual executives at the company (who are primarily paid in stock) *personally* much poorer, prompting their departure, while simultaneously kneecapping the company's ability to grow through acquisition and hiring, because a company with a falling share price has to buy things with cash, not stock.
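The re-rating mechanic described above can be sketched as back-of-envelope arithmetic. All figures here are invented for illustration (a hypothetical $10b in annual earnings, 40x and 15x PE multiples, a $20b all-stock deal) - none are real numbers for any company:

```python
# Hypothetical figures, purely to illustrate the PE re-rating mechanic --
# not real numbers for any company.

def market_cap(earnings_b: float, pe_multiple: float) -> float:
    """Stock-market value is roughly annual earnings times the PE multiple."""
    return earnings_b * pe_multiple

earnings = 10.0  # $10b/year in profit, identical in both scenarios

# While investors believe the growth story, the stock trades at a rich multiple.
growth_cap = market_cap(earnings, pe_multiple=40)  # $400b

# When growth slows, the *same* earnings get re-rated at a lower multiple.
mature_cap = market_cap(earnings, pe_multiple=15)  # $150b

# The same all-stock acquisition now costs a much bigger slice of the company,
# and executives paid in stock just took a ~60% pay cut.
deal = 20.0  # a $20b acquisition
print(f"growth-story valuation: ${growth_cap:.0f}b, "
      f"deal costs {deal / growth_cap:.1%} of the company")
print(f"re-rated valuation:     ${mature_cap:.0f}b, "
      f"deal costs {deal / mature_cap:.1%} of the company")
```

Nothing about the underlying business changed between the two scenarios - only the multiple investors were willing to pay for it.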

19/

Companies can make more of their own stock on demand, simply by typing zeroes into a spreadsheet - but they can only get cash by convincing a customer, creditor, or investor to part with some of their own:

https://pluralistic.net/2025/03/06/privacy-last/#exceptionally-american

Tech companies have absurdly large market shares - think of Google's 90% search dominance - and so they've spent 15+ years coming up with increasingly absurd gambits to convince investors that they will continue to grow by capturing *other* markets.

20/


At first, these companies claimed that they were on the verge of eating one another's lunches (Google would destroy Facebook with G+; Facebook would do the same to YouTube with the "pivot to video").

This has a real advantage in that one need not speculate about the potential value of Facebook's market - you only have to look at Facebook's quarterly reports.

21/

But the downside is that Facebook has its own ideas about whether Google is going to absorb its market, and they are prone to forcefully make the case that this won't happen.

After a few tumultuous years, tech giants switched to promoting growth via speculative new markets - metaverse, web3, crypto, blockchain, etc. Speculative new markets are *speculative*, and the weakness of that approach is that no one can say how big those markets might be.

22/

But that's also the *strength* of those markets, because if no one can say how big those markets might be, then who's to say that they won't be *very* big indeed?

There's a different advantage to confining your concerns to imaginary things: imaginary things don't exist, so they don't contest your public statements about them, nor do they make demands on you.

23/

Think of how the right concerns itself with imaginary children (unborn babies, children in Wayfair furniture, children in nonexistent pizza parlor basements, children undergoing gender confirmation surgery). These are very convenient children to advocate for, since, unlike real children...

24/

....(hungry children, children killed in the Gaza genocide, children whose parents have been kidnapped by ICE, children whom Matt Gaetz and Donald Trump trafficked for sex, children in cages at the US border, trans kids driven to self-harm and suicide after being denied care)..., nonexistent children don't want anything from you and they never make public pronouncements about whether you have their best interests at heart.

25/

But as the AI project has required larger and larger sums to keep the wheels spinning, the usual suspects have started to run out of money, and now AI hustlers are increasingly looking to tap *public* markets for capital. They want you to invest your pension savings in their growth narrative machine, and they're relying on the fact that you don't understand the technology to trick you into handing over your money.

26/

There's a name for this: it's called the "Byzantine premium" - that's the premium that an investment opportunity attracts by being so complicated and weird that investors don't understand it, making them easy to trick:

https://pluralistic.net/2022/03/13/the-byzantine-premium/

AI is a terrible economic phenomenon. It has lost more money than any other project in human history - $600-700b and counting, with *trillions* more demanded by the likes of OpenAI's Sam Altman.

27/


AI's core assets - data centers and GPUs - last 2-3 years, though AI bosses insist on depreciating them over five years, which is unequivocal accounting fraud, a way to obscure the losses the companies are incurring. But it doesn't actually matter whether the assets need to be replaced every two years, every three years, or every five years, because all the AI companies *combined* are claiming no more than $60b/year in revenue (that number is grossly inflated).

28/

You can't reach the $700b break-even point at $60b/year in two years, three years, *or* five years.
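That break-even arithmetic can be made explicit. This sketch uses the thread's own rough figures (~$700b committed, at most $60b/year in combined claimed revenue) and generously pretends revenue is pure profit:

```python
# Back-of-envelope break-even check, using the thread's rough figures.
# This generously treats revenue as pure profit -- in reality every query
# loses money, so the true picture is worse.

sunk_cost_b = 700      # ~$700b committed so far
annual_revenue_b = 60  # combined claimed industry revenue (likely inflated)

for asset_life_years in (2, 3, 5):
    # Maximum revenue the assets could earn before they must be replaced
    lifetime_revenue_b = annual_revenue_b * asset_life_years
    shortfall_b = sunk_cost_b - lifetime_revenue_b
    print(f"{asset_life_years}-year asset life: ${lifetime_revenue_b}b earned, "
          f"${shortfall_b}b short of break-even")
```

Even on the five-year depreciation schedule the companies prefer, the assets wear out $400b short of the money already spent - before you subtract a single operating cost.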

Now, some exceptionally valuable technologies *have* attained profitability after an extraordinarily long period in which they lost money, like the web itself. But these turnaround stories all share a common trait: they had good "unit economics." Every new web user reduced the amount of money the web industry was losing.

29/

Every time a user logged onto the web, it made the industry more profitable. Every generation of web tech was more profitable than the last.

Contrast this with AI: every user - paid or unpaid - an AI company signs up costs them money. Every time a user logs into a chatbot or enters a prompt, the company loses more money. The more a user uses an AI product, the more money that product loses. And each generation of AI tech loses more money than the generation that preceded it.
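The contrast between the two unit-economics regimes above can be sketched with a toy model. All figures are invented for illustration ($3b/year in fixed costs, each web-style user contributing $2 of margin, each AI-style user costing $5):

```python
# Toy contrast of the two unit-economics regimes described above.
# All numbers are invented purely for illustration.

def yearly_loss(users_m: float, loss_per_user: float, fixed_costs_b: float) -> float:
    """Annual loss in $b: fixed costs plus per-user economics
    (a negative loss_per_user means each user contributes margin)."""
    return fixed_costs_b + users_m * loss_per_user / 1000  # users in millions

# Web-style unit economics: each user contributes $2 of margin,
# so growth steadily shrinks the loss toward profitability.
for users in (100, 500, 1000):
    print(f"web @ {users}m users: ${yearly_loss(users, -2.0, 3.0):.1f}b/year lost")

# AI-style unit economics: each user costs $5, so growth deepens the loss.
for users in (100, 500, 1000):
    print(f"ai  @ {users}m users: ${yearly_loss(users, 5.0, 3.0):.1f}b/year lost")
```

In the first regime, scale is the path out of the hole; in the second, scale digs the hole deeper.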

30/

To make AI look like a good investment, AI bosses and their pitchmen have to come up with a story that somehow addresses this phenomenon. Part of that story relies on the Byzantine premium: "Sure, you don't understand AI, but why would all these smart people commit hundreds of billions of dollars to AI if they weren't confident that they would make a lot of money from it?" In other words, "A pile of shit *this big* must have a pony underneath it *somewhere*!"

31/

This is a great narrative trick, because it turns losing money into a virtue. If you've convinced a mark that the upside of the project is a multiple of the capital committed to it, then the more money you're losing, the better the investment seems.

So this is the first AI psychosis: the idea that we should bet the world's economy on these highly combustible GPUs and data centers with terrible unit economics and no path to break-even, much less profitability.

32/

Investors' AI psychosis is cross-fertilized by our second form of AI psychosis, which is the *bosses'* AI psychosis: bosses' bottomless passion for firing workers and replacing them with automation.

Bosses are easy marks for anything that lets them fire workers. After all, the ideal firm is one that charges infinity for its outputs (hence the market's passion for monopolies) and pays nothing for its inputs (e.g. "academic publishing").

33/

The companies I advise are living this in real time. The executives most excited about AI headcount reduction are the same ones who have never mapped what their teams actually do. They see a job title on a spreadsheet and assume a chatbot can replace it. Meanwhile the one person who used Claude to summarize six months of A/B test results in an afternoon gets no recognition because that does not look like a transformation initiative. The psychosis runs in both directions.