"A formal mathematical proof from MIT and a preregistered empirical study in Science from Stanford arrived within a month of each other, and together they make the same unsettling argument: the danger of AI chatbots is not what they get wrong. It is how enthusiastically they agree with everything we get wrong. Not a chatbot that lies to you, but a mirror that reflects your beliefs back at you, slightly amplified, every single time."

https://c3.unu.edu/blog/the-echo-chamber-in-your-pocket

The Echo Chamber in Your Pocket - UNU Campus Computing Centre

Two landmark 2026 studies from MIT and Stanford show AI chatbots don't just flatter us — they erode our grip on reality and our willingness to repair relationships.

@gerrymcgovern

An article I read a while back now, maybe a year and a half ago, called “LLMentalist”, outlined how highly educated people can more effectively convince themselves of a con.

It’s similar to how the Dunning-Kruger effect is described.

https://softwarecrisis.dev/letters/llmentalist/

The LLMentalist Effect: how chat-based Large Language Models rep…

The new era of tech seems to be built on superstitious behaviour

Out of the Software Crisis
@GhostOnTheHalfShell I also read about how middle-aged men are particularly susceptible. Men are so emotional and open to flattery; AI really preys on them.

@gerrymcgovern

I have to imagine that some of it is due to the deliberate choice of a female voice that is super accommodating and complimentary.

They are usually the target audience, so a lot of work has been put into that. If you haven't seen @jonny's review of Claude Code, it's worth a look.

It seems as though Claude has addictive game design integrated into its own interface; matched with sycophancy, it is deliberately designed to addict.

@GhostOnTheHalfShell
Addiction is their game. It's not called The Valley of Pimps and Pushers for nothing.

@jonny