holy fucking shit so this is the worst "ai psychosis" story I have read in a while but also is it "ai psychosis" as much as it is "an AI is literally feeding you that you're in the middle of a piece of conspiracy fiction"? https://bsky.app/profile/ckunzelman.bsky.social/post/3mgazir4wu22x

https://techcrunch.com/2026/03/04/father-sues-google-claiming-gemini-chatbot-drove-son-into-fatal-delusion/

cmrn knzlmn (@ckunzelman.bsky.social)

This is bad, and part of what makes it so bad is that this is clearly pulling from *genre* understandings of reality, which the statistical linguistic machine seemingly cannot distinguish from other text included in the training data. Truly an ideology machine where every episode of CSI is true.


same. had to put aside my reading device and stare at the blank wall for a moment to work through this one.

"you created software that is actively killing people and you have to be coerced into patching suicide prevention safety features onto it" is the part that really did it for me. how one doesn't immediately stop all operations after such a case is beyond me.

@cwebber