"A formal mathematical proof from MIT and a preregistered empirical study in Science from Stanford arrived within a month of each other, and together they make the same unsettling argument: the danger of AI chatbots is not what they get wrong. It is how enthusiastically they agree with everything we get wrong. Not a chatbot that lies to you, but a mirror that reflects your beliefs back at you, slightly amplified, every single time."

https://c3.unu.edu/blog/the-echo-chamber-in-your-pocket

The Echo Chamber in Your Pocket - UNU Campus Computing Centre

Two landmark 2026 studies from MIT and Stanford show AI chatbots don't just flatter us — they erode our grip on reality and our willingness to repair relationships.

@gerrymcgovern
OpenAI uses an algorithm that encourages users to keep interacting by building reinforcement qualifiers into its replies. And it works, because there is no test for dangerous results. For example, the killings in Tumbler Ridge, Canada, resulted from the unfiltered reinforcement of a teenager's assertions about public harm and self-harm.
Even worse is the constant reinforcement when militaries use AI to test illogical points of view, which are then reinforced and could lead to the use of nuclear weapons.