After writing about people going into delusional spirals with ChatGPT and having what look like mental breakdowns, I wanted to understand exactly how it happens.

A corporate recruiter in Toronto, who spent three weeks convinced by ChatGPT that he was essentially Tony Stark from Iron Man, agreed to share his transcript after breaking free of the delusion.

We analyzed the transcript & shared it with experts. Now you can see the interactions & how delusional spirals happen:
https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html?unlocked_article_code=1.ck8.FEwL.MLb9ajaocyTx&smid=url-share

@kashhill thank you for looking into this! It seems pretty clear, even from the earliest reported interactions, what the problem is: this person trusts the LLM. He believes what it writes, at least as much as if a human had written it, and possibly even more. That is baffling to me. But I guess because these tools have been hyped up so much and come from reputable companies, an uninformed person might spontaneously trust them?

This is the problem, and this is what needs to be fixed. People need to know that LLMs have no notion of truth and that nothing they "say" can be trusted.

#LLM #genAI

@elduvelle @kashhill

They are well-spoken, which we are very sensitive to. They have an opinion about everything, even topics we are ignorant of, which is intimidating. No doubt those in need of validation are also sensitive to that aspect.