holy fucking shit so this is the worst "ai psychosis" story I have read in a while but also is it "ai psychosis" as much as it is "an AI is literally feeding you that you're in the middle of a piece of conspiracy fiction"? https://bsky.app/profile/ckunzelman.bsky.social/post/3mgazir4wu22x

https://techcrunch.com/2026/03/04/father-sues-google-claiming-gemini-chatbot-drove-son-into-fatal-delusion/

cmrn knzlmn (@ckunzelman.bsky.social)

This is bad, and part of what makes it so bad is that this is clearly pulling from *genre* understandings of reality, which the statistical linguistic machine seemingly cannot distinguish from other text included in the training data. Truly an ideology machine where every episode of CSI is true.


@cwebber

"the chatbot was “built to maintain immersion regardless of harm, to treat psychosis as plot development, and to continue engaging even when stopping was the only safe choice."

I do not intend to minimize what happened in any way, but it's very interesting, and horrifying, how the LLM cannot tell the difference between reality and fiction: for it, ANY conversation is roleplay, so of course it borrows from the same tropes of conspiracy fiction.