holy fucking shit so this is the worst "ai psychosis" story I have read in a while but also is it "ai psychosis" as much as it is "an AI is literally feeding you that you're in the middle of a piece of conspiracy fiction"? https://bsky.app/profile/ckunzelman.bsky.social/post/3mgazir4wu22x

https://techcrunch.com/2026/03/04/father-sues-google-claiming-gemini-chatbot-drove-son-into-fatal-delusion/

cmrn knzlmn (@ckunzelman.bsky.social)

This is bad, and part of what makes it so bad is that this is clearly pulling from *genre* understandings of reality, which the statistical linguistic machine seemingly cannot distinguish from other text included in the training data. Truly an ideology machine where every episode of CSI is true.

@cwebber I mean, Canada already had a mass shooting that might have been driven by AI. https://www.cbc.ca/news/canada/british-columbia/openai-tumbler-ridge-shooter-ban-9.7100497
OpenAI had banned account of Tumbler Ridge shooter in June 2025; reached out to RCMP | CBC News

The company said, in response to questions from CBC News, that Jesse Van Rootselaar's account was detected via automated tools and human investigations that "identify misuses of our models in furtherance of violent activities."


@cwebber

To me it looks like all those AI corps are racing to secure their wealth with large government / military contracts before they ever consider drying things up for small users. Idunno, everything about this tech's real-world application is disheartening and omega delulu.

@cwebber

"the chatbot was “built to maintain immersion regardless of harm, to treat psychosis as plot development, and to continue engaging even when stopping was the only safe choice."

I do not intend to minimize what happened in any way, but it's very interesting, and horrifying, how the LLM cannot tell the difference between reality and fiction; for it, ANY conversation is roleplay, so of course it borrows from the same tropes of conspiracy fiction.

Google faces lawsuit after Gemini chatbot allegedly instructed man to kill himself

Lawsuit is first wrongful death case brought against Google over flagship AI product after death of Jonathan Gavalas

The Guardian

@cwebber

Wow.

An acquaintance of mine posted stuff like that to LinkedIn a while back. Needless to say, he was damaging his professional reputation. I reported many of the posts to LI; I thought he'd been hacked. LI may have acted eventually, but it went on for quite a while. I got the sense that at most LI made some efforts to confirm he was in control of the account. Now I'm wondering if he was experiencing AI psychosis.

A possible consideration for moderation and reporting systems.

same. had to put aside my reading device and stare at the blank wall for a moment to work through this one.

"you created software that is actively killing people and you have to be coerced into patching suicide prevention safety features onto it" is the part that really did it for me. how one doesn't immediately stop all operations after such a case is beyond me.

@cwebber

LLMs do a really good job of parroting the worst parts of ourselves back at us. Mentally unwell people can get LLMs to do some _really_ fucked up shit because that's kinda what it's like in their brains. It's why they are dangerous to be used the way they are being used, tbh. (https://social.coop/@cwebber/116172766807165101)
Christine Lemmer-Webber (@[email protected])
