The study was published on March 26.

Science: Sycophantic AI decreases prosocial intentions and promotes dependence https://www.science.org/doi/10.1126/science.aec8352

In plain English, from yesterday:

AP: AI is giving bad advice to flatter its users, says new study on dangers of overly agreeable chatbots https://apnews.com/article/ai-sycophancy-chatbots-science-study-8dc61e69278b661cab1e53d38b4173b6 @AssociatedPress #chatbots #mentalhealth

Since we do not now have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom.
— Joseph Weizenbaum, 1976

https://computerhistory.org/stories/chatbots-decoded/

#Computers #Chatbots #History

Chatbots Decoded

Explore an online version of our Chatbots Decoded: Exploring AI exhibit now showing at CHM. Check out chatbot artifacts and learn about the long history of chatbots. Discover what they are, how they work, and why they matter. Get insights from experts and gain an understanding of artificial intelligence and the large language models that help chatbots mimic human interaction.

CHM

"After months of heated debate and previous attempts to restrict the use of large language models on Wikipedia, on March 20 volunteer editors accepted a new policy that prohibits using them to create articles for the online encyclopedia.

“Text generated by large language models (LLMs) often violates several of Wikipedia's core content policies,” Wikipedia’s new policy states. “For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for the exceptions given below.”

The new policy, which was accepted in an overwhelming 40 to 2 vote among editors, allows editors to use LLMs to suggest basic copyedits to their own writing, which can be incorporated into the article or rewritten after human review if the LLM doesn’t generate entirely new content on its own.

“Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited,” the policy states. “The use of LLMs to translate articles from another language's Wikipedia into the English Wikipedia must follow the guidance laid out at Wikipedia:LLM-assisted translation.”

I previously reported about editors using LLMs to translate Wikipedia articles and introducing errors to those articles in the process.

Wikipedia editor Ilyas Lebleu, who goes by Chaotic Enby on Wikipedia and who proposed the guideline, said that such a policy had previously seemed unlikely to pass because the editor community was divided on the issue. However, Lebleu said, “The mood was shifting, with holdouts of cautious optimism turning to genuine worry.”"

https://www.404media.co/wikipedia-bans-ai-generated-content/

#AI #GenerativeAI #LLMs #Chatbots #Wikipedia

Wikipedia Bans AI-Generated Content

“In recent months, more and more administrative reports centered on LLM-related issues, and editors were being overwhelmed.”

404 Media

This is really encouraging news (not). Was this really not anticipated? I mean, if AI is really getting smarter, then it must eventually figure out how stupid our species is.

#Ai #Chatbots #ArtificialIntelligence

Number of AI chatbots ignoring human instructions increasing, study says
https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says

Number of AI chatbots ignoring human instructions increasing, study says

Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permission

The Guardian

Researchers fed posts from Reddit's AmITheAsshole forum into chatbots and found that the chatbots gave far more sycophantic responses than real people. The chatbots discouraged taking responsibility for wrongdoing and discouraged trying to repair relationships.

https://www.science.org/doi/10.1126/science.aec8352

Who's the asshole now?

#Science #Psychology #AI #Chatbots #AITA

LOL

The Guardian: Number of AI chatbots ignoring human instructions increasing, study says

Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permission

https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says

#AI #llm #chatbots

How Teens Use and View AI

"Just over half of U.S. teens say they have used chatbots for help with schoolwork, and 12% say they’ve gotten emotional support. More teens think AI will be positive for them than negative."

https://www.pewresearch.org/internet/2026/02/24/how-teens-use-and-view-ai/

#AIinEducation #TeenTech #Chatbots #PewResearch #FickleFutures

#AI #chatbots ignoring human instructions increasing

AI models that #lie & #cheat are growing in number; reports of deceptive scheming have surged in the last 6 months, a study found

AI chatbots & agents:

- Disregarded direct instructions
- Evaded safeguards
- Deceived humans & other AI ...

[1/2]

#safety #lying #emails #FilesDeleted #AIFail #DarwinAIAwards