"A formal mathematical proof from MIT and a preregistered empirical study in Science from Stanford arrived within a month of each other, and together they make the same unsettling argument: the danger of AI chatbots is not what they get wrong. It is how enthusiastically they agree with everything we get wrong. Not a chatbot that lies to you, but a mirror that reflects your beliefs back at you, slightly amplified, every single time."

https://c3.unu.edu/blog/the-echo-chamber-in-your-pocket

The Echo Chamber in Your Pocket - UNU Campus Computing Centre

Two landmark 2026 studies from MIT and Stanford show AI chatbots don't just flatter us — they erode our grip on reality and our willingness to repair relationships.

@gerrymcgovern AI chatbots always give you an answer, even if it's wrong, because they *have* to give you an answer.

If they were told to give you some sort of confidence score, say, "I'm 60% confident this is correct", you wouldn't use them. You'd just do your own research. You wouldn't base your work on a source that was only 60% likely to be true, right?

So they don't tell you how crappy their answers are, because if they told you their answers were crap, you wouldn't use them.