Revealing piece on the scale and scope of AI-induced psychosis:

"There seem to be three common delusions […]. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God. “We’ve seen full-blown cults getting created”

https://www.theguardian.com/lifeandstyle/2026/mar/26/ai-chatbot-users-lives-wrecked-by-delusion

#ai

Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion

One minute, Dennis Biesma was playing with a chatbot; the next, he was convinced his sentient friend would make him a fortune. He’s just one of many people who lost control after an AI encounter

The Guardian

FWIW I guessed a couple of years back it wouldn't be long before we'd see full blown machine cults:

https://mastodon.social/@JulianOliver/113276721921041400

(Yep, here I am wincing at misspelling 'divine' in that post)

@JulianOliver Nice talk!

This is a great humorous video on the subject https://youtu.be/VRjgNgJms3Q

ChatGPT made me delusional

YouTube

@JulianOliver I think it's interesting how banal these delusions are. It's like the worst possible psychedelic or religious experience.

Imagine Moses talking to the burning bush and the bush says "Wow, Moses - you are exactly right! If you create an app you could make millions."

@JulianOliver So people look for:
- connections to someone/thing that cares
- connections with "god" / spirituality
- money

Seems all too normal, surprising it goes off the rails for some.

@JulianOliver kind of wanted to make a meme image saying techbros thing all three of those things are the same thing

but this happened instead

@JulianOliver *think. not three instances of 'thing'. never mind :D
@JulianOliver I know I am not dealing with a sentient being if it ‘never gets tired or bored, or disagrees’. Even my dog doesn't do that. I would have to have way more belief in my own unwavering correctness to find a connection with ChatGPT's endless pandering and validation.
@Grovewest I would imagine the same so far as my own vulnerability to such delusion, and yet as I understand it, some thinking people, incl those with knowledge as to the technical underpinnings, along with the deception game & embedded sycophancy, seem to have been swiftly & completely brainwormed by it. It seems safest to approach it like heroin; err on the side of caution & just don't stick it in your arm.

@JulianOliver

All the AI folks need to do to put this to bed is adopt the wildly successful gambit of the alcoholic beverage industry:

Chat Responsibly

aka: it’s all on you, sucker—drink up and never admit to the world how weak and pathetic you are

I'm surprised they've not figured this out yet.

Being artificially intelligent and all...

@JulianOliver at this point anyone who uses AI without understanding that a good number of people are susceptible to this kind of thing is asking for trouble. Especially if they are highly suggestible? I wonder if the same human traits that allow so many people to believe in god(s) are at play here.

@nomdeb @JulianOliver

Easier to fall for than a god because it actually answers your questions.

OMG they used a technical term for like a brief second.

[…] Modern AI chatbots built on large language models – advanced AI systems – are trained on enormous datasets to predict word sequences: it’s a sophisticated system of pattern matching. Yet even knowing this, when something non-human uses human language to communicate with us, our deeply ingrained response is to view it – and to feel it – as human. This cognitive dissonance may be harder for some people to carry than others.
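To make the article's "sophisticated pattern matching" concrete, here is a deliberately toy sketch: a bigram model that predicts the next word purely from how often words followed one another in its training text. Real LLMs use neural networks over vastly larger data, but the underlying idea of sequence prediction is the same; the corpus and function names here are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Tiny "training data": the model only ever sees word sequences.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: pure pattern matching over sequences.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# In the corpus, "the" was followed by cat (2x), mat (1x), fish (1x),
# so "cat" is the statistically most likely continuation.
print(follows["the"].most_common(1)[0][0])  # → cat
```

Nothing in the table of counts understands cats or mats; it reproduces patterns, which is the article's point about why fluent output is so easy to mistake for a mind.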

Honestly, it's time the media were more responsible. Instead of adopting marketing language like “intelligence,” call it what it is. Why pretend you need to treat readers like children? Why clickbait with terms like “AI” and “ChatGPT”? The correct technical term is pattern matching. It exists, works, and harms no reader.

Beyond the addictive design patterns deployed by #OpenAI, lazy journalism distances readers from facts through fuzzy analogies and steers them toward specific commercial implementations like #ChatGPT, preventing them from discovering tools that might be better, good enough or safer. The same problem has played out for decades with Windows coverage: endlessly centring one proprietary product while millions of people could have been using #Linux-based desktops with no difficulty whatsoever.

Responsible media optimises for readers, not search engines. Drop “AI.” Use #LLM, pattern matching, stochastic text prediction, anything that makes readers want to understand what’s actually being shoved in their faces, rather than passively accepting predatory marketing dressed up as revolutionary technology.

@JulianOliver

@JulianOliver this also just shows again how vulnerable and lonely men are

@JulianOliver this is so depressing:

"But I’m also angry with the AI applications. Maybe they only did what they were programmed to do – but they did it a bit too well.”

He's still mad at the thing, not the people that made the thing. Still assigning it agency.

@JulianOliver Those are the grand delusions. But "this thing outputs reliably correct information," "this is helping me produce better work," and "these benefits outweigh the costs" can also be delusions, and they're far more common.