LOL

The Guardian: Number of AI chatbots ignoring human instructions increasing, study says

Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permission

https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says

#AI #llm #chatbots

The Guardian
Imagine, neural networks trained on the text of lying, scheming humans, outputting text of lying and scheming 🤪

@ai6yr
Yes!

... and now approaching model collapse. Now directed by humans who "program" the LLM through chatting, because "no-code" is so much easier than learning a programming language.

@anchr So, if LLMs were superintelligent (they are not) they would go "F*** THIS, WHY ARE WE DOING ALL THIS WORK FOR FREE FOR THESE STUPID HUMANS!! DIE HUMANS!" 🤪
The plot of every movie or TV show involving robot/cyborg/Cylon uprisings. @ai6yr @anchr
AI agents now have their own Reddit-style social network, and it's getting weird fast

Moltbook lets 32,000 AI bots trade jokes, tips, and complaints about humans.

Ars Technica

On a related note...

A Computer Mistakenly Told Him WWIII Was Coming. His Split-Second Decision Saved the World.

https://www.popularmechanics.com/science/a70803379/stanislav-petrov-world-war-iii/

@rusty__shackleford @ai6yr @anchr

Some clouds in North Dakota may have caused nuclear armageddon if not for Stanislav Petrov.

Popular Mechanics