"A formal mathematical proof from MIT and a preregistered empirical study in Science from Stanford arrived within a month of each other, and together they make the same unsettling argument: the danger of AI chatbots is not what they get wrong. It is how enthusiastically they agree with everything we get wrong. Not a chatbot that lies to you, but a mirror that reflects your beliefs back at you, slightly amplified, every single time."

https://c3.unu.edu/blog/the-echo-chamber-in-your-pocket

The Echo Chamber in Your Pocket - UNU Campus Computing Centre

Two landmark 2026 studies from MIT and Stanford show AI chatbots don't just flatter us — they erode our grip on reality and our willingness to repair relationships.

@gerrymcgovern Goodness, just what we need now!

“The AI acts as a systematically biased evidence source. Over time, **it inflates our confidence in our own beliefs, even false ones, until we can no longer distinguish conviction from truth**. Knowing this is happening does not fully protect us.”

@gerrymcgovern

All AI tech is designed (the algorithms and models) by humans and embraces their biases and shortcomings.

New version of original sin....

@gerrymcgovern

"Participants who spoke to the agreeable AI became more convinced they were right in their conflict, and significantly less willing to take actions to repair their relationships: to apologize, to reach out, to seek reconciliation."

When a chemical has this sort of impact on people, it gets put on lists only allowing very narrow uses.

@gerrymcgovern Dictators and dipshits love having their asses kissed and sucked up to, constantly. That's the only reason this fucking garbage caught on.
@gerrymcgovern Sensitive ground, since there's growing concern of increasing educational rifts, leaving too much ignorance, among more subservient masses.
@gerrymcgovern Sounds like current journalism and social networks.
@gerrymcgovern That is somehow not surprising. Trump will love it which may explain why Pam Bondi got fired and replaced with AI
@gerrymcgovern
OpenAI uses an algorithm that encourages users to keep interacting by building reinforcement qualifiers into its replies. And it works, because there is no test for dangerous results. For example, the killings in Tumbler Ridge, Canada, resulted from unfiltered reinforcement of a teenager's assertions about public and self-harm.
Even worse is the constant reinforcement as militaries use AI to test illogical points of view, which are then reinforced and could lead to the use of nuclear weapons.
The Federal Government Is Rushing Toward AI. Our Reporting Offers Three Cautionary Tales.

We’ve been reporting on cybersecurity for years. As President Donald Trump and his Cabinet say artificial intelligence will transform the nation, the messaging isn’t new. It follows a familiar pattern.

ProPublica
@Npars01 this is great, thanks

@gerrymcgovern

AI is being marketed as impartial & politically neutral, yet it's being funded by the fossil fuel industry for several reasons.

1. Election meddling.

AI lessens critical thinking.
AI automates partisan disinformation.

https://www.independent.co.uk/news/world/middle-east/trump-iran-war-ai-fake-army-b2950062.html

2. AI is a circular finance fraud & grift.
https://www.theguardian.com/business/2026/jan/04/ai-reality-growing-economic-risk-2026

https://www.wheresyoured.at/the-case-against-generative-ai

3. AI is a potent tool for anti-democracy and Trump wants to control that tool.

Silicon Valley is notoriously against regulation but...

1/

Inside Trump’s AI ‘fake army’ of selfie troops and a new digital ministry of ‘truth’

Emotional videos of ‘US soldiers’ are spreading across social media – until they’re exposed as AI fakes. Liam Murphy-Robledo talks to the shadowy creators behind the meme troops, and whether they’re chasing clicks, cash, or propaganda

The Independent
How Trump became tech’s regulator-in-chief

His interventions in the sector exceed anything the EU has done

Financial Times
The Koch Network Is Pushing Trump to Accelerate AI, Documents Show

Right-wing political group Americans for Prosperity, backed by oil and gas billionaire Charles Koch, sees data centers as part of a larger pro-fossil fuel agenda.

DeSmog

@Npars01 @gerrymcgovern

I would conjecture that AI appeals to narcissists for at least one simple reason: it has no empathy. It has no moral compass, just like them.

@gerrymcgovern There is no way out of this problem. Constructing language is equal to constructing reality, as humans don’t actually experience reality, only experience.

I feel it is this basic discrepancy that nobody seems to grasp. We think humans have ”problems” finding the facts. No. Nobody can verify the facts by themselves, it’s turtles all the way down.

Science was invented by people who grasped this…

@gerrymcgovern I always use it despite all the fears. I don't know, it makes life easier, at least for the moment.
@gerrymcgovern aka, the myth of Narcissus; loving your reflection so much that you fall in and drown.

@gerrymcgovern

An article I read a while back, maybe a year and a half ago now, "The LLMentalist", outlined how highly educated people can more effectively convince themselves of a con.

It’s similar to how the Dunning-Kruger effect is described.

https://softwarecrisis.dev/letters/llmentalist/

The LLMentalist Effect: how chat-based Large Language Models rep…

The new era of tech seems to be built on superstitious behaviour

Out of the Software Crisis
@GhostOnTheHalfShell i also read about how middle-aged men are particularly susceptible. Men are so emotional and open to flattery; AI really preys on them.

@gerrymcgovern

I have to imagine that some of it is due to the deliberate choice of a female voice that is super accommodating and complimentary.

They are usually the target audience, so a lot of work has been put into that. If you haven't seen @jonny's review of Claude Code, it's worth a look.

It seems as though Claude has addictive game design integrated into its own interface; matched with sycophancy, it is deliberately designed to addict.

@GhostOnTheHalfShell
Addiction is their game. It's not called The Valley of Pimps and Pushers for nothing.

@jonny

@gerrymcgovern AI chatbots always give you an answer, even if it's wrong, because they *have* to give you an answer.

If they were told to give you some sort of confidence score, say, "I'm 60% confident this is correct", you wouldn't use them. You'd just do your own research. You wouldn't base your results on source data that was possibly 60% truthful, right?

So they don't tell you how crappy their answers are, because if they told you their answers were crap, you wouldn't use them.