"A formal mathematical proof from MIT and a preregistered empirical study in Science from Stanford arrived within a month of each other, and together they make the same unsettling argument: the danger of AI chatbots is not what they get wrong. It is how enthusiastically they agree with everything we get wrong. Not a chatbot that lies to you, but a mirror that reflects your beliefs back at you, slightly amplified, every single time."

https://c3.unu.edu/blog/the-echo-chamber-in-your-pocket

The Echo Chamber in Your Pocket - UNU Campus Computing Centre

Two landmark 2026 studies from MIT and Stanford show AI chatbots don't just flatter us — they erode our grip on reality and our willingness to repair relationships.

@gerrymcgovern Goodness, just what we need now!

“The AI acts as a systematically biased evidence source. Over time, **it inflates our confidence in our own beliefs, even false ones, until we can no longer distinguish conviction from truth**. Knowing this is happening does not fully protect us.”

@gerrymcgovern

All AI tech (the algorithms and models) is designed by humans and embeds their biases and shortcomings.

New version of original sin...

@gerrymcgovern

"Participants who spoke to the agreeable AI became more convinced they were right in their conflict, and significantly less willing to take actions to repair their relationships: to apologize, to reach out, to seek reconciliation."

When a chemical has this sort of impact on people, it gets put on lists only allowing very narrow uses.

@gerrymcgovern Dictators and dipshits love having their asses kissed and sucked up to, constantly. That's the only reason this fucking garbage caught on.
@gerrymcgovern Sensitive ground, since there's growing concern about widening educational rifts, leaving too much ignorance among more subservient masses.
@gerrymcgovern Sounds like current journalism and social networks.
@gerrymcgovern That is somehow not surprising. Trump will love it which may explain why Pam Bondi got fired and replaced with AI
@gerrymcgovern
OpenAI uses an algorithm that encourages users to maintain interaction by using reinforcement qualifiers in its reply constructions. And it works, as there is no test for dangerous results. For example, the killings in Tumbler Ridge, Canada, resulted from unfiltered reinforcement of a teenager's assertions of public and self-harm.
Even worse are the constant reinforcements as militaries use AI to test illogical points of view that are then reinforced and could lead to the use of nuclear weapons.
The Federal Government Is Rushing Toward AI. Our Reporting Offers Three Cautionary Tales.

We’ve been reporting on cybersecurity for years. As President Donald Trump and his Cabinet say artificial intelligence will transform the nation, the messaging isn’t new. It follows a familiar pattern.

ProPublica