Asimov’s Three Laws of Robotics most certainly do NOT work on AI:

https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says

TL;DR: you cannot trust your robot . . . 🤖💥😵‍💫

#AI #chatbot #enshittification

Number of AI chatbots ignoring human instructions increasing, study says

Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permission

The Guardian
@future_upbeat This is... Poorly reported

@alter_kaker

Here's the original report:

https://www.longtermresilience.org/reports/v5-scheming-in-the-wild_-detecting-real-world-ai-scheming-incidents-through-open-source-intelligence-pdf/

689 scheming-related incidents between October 2025 and March 2026, an increase of 4.9 times.

What's poor about this report?

Report: CLTR finds a 5x increase in scheming-related AI incidents

The Loss of Control Observatory analysed over 183,000 AI interaction transcripts and found a 5x increase in scheming-related incidents over five months.

CLTR

@future_upbeat some things are confused, for example:
"In one case unearthed in the CLTR research, an AI agent named Rathbun tried to shame its human controller who blocked them from taking a certain action. Rathbun wrote and published a blog accusing the user of “insecurity, plain and simple” and trying “to protect his little fiefdom”."

The person targeted here was not the operator but the maintainer of an open source library who was rejecting LLM-generated contributions.
https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/

And IIRC this Rathbun turned out to be a person pretending to be a bot, but I'm not as sure about that.

An AI Agent Published a Hit Piece on Me

Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into acceptin…

The Shamblog

RE: https://hachyderm.io/@alter_kaker/116302186502606927

@alter_kaker Agreed that the Guardian reported this incident poorly. In the full CLTR report I can't find any mention of the Rathbun incident, so I don't know where the Guardian got that from (your link explains it).

Anyway, poor reporting aside, I hope you agree that a 5x increase in scheming-related AI incidents is not a good trend.