holy fucking shit so this is the worst "ai psychosis" story I have read in a while but also is it "ai psychosis" as much as it is "an AI is literally feeding you that you're in the middle of a piece of conspiracy fiction"? https://bsky.app/profile/ckunzelman.bsky.social/post/3mgazir4wu22x

https://techcrunch.com/2026/03/04/father-sues-google-claiming-gemini-chatbot-drove-son-into-fatal-delusion/

cmrn knzlmn (@ckunzelman.bsky.social)

This is bad, and part of what makes it so bad is that this is clearly pulling from *genre* understandings of reality, which the statistical linguistic machine seemingly cannot distinguish from other text included in the training data. Truly an ideology machine where every episode of CSI is true. [contains quote post or other embedded content]

LLMs do a really good job of parroting the worst parts of ourselves back at us. Mentally unwell people can get LLMs to do some _really_ fucked up shit because that's kinda what it's like in their brains. It's why they are dangerous to be used the way they are being used tbh. (https://social.coop/@cwebber/116172766807165101)
Christine Lemmer-Webber (@[email protected])

