This Guardian article https://www.theguardian.com/technology/2026/apr/29/claude-ai-deletes-firm-database falls into the same anthropomorphism trap as the original post I read: https://oldbytes.space/@fluidlogic/116482496017786464
agent gone rogue
These tools have no concept of what a job is. They don't go rogue; they produce plausible text. Now complete idiots have wired them up to command lines (the old-school but still powerful way for humans to interact with computers) and APIs (programmatic mechanisms for software to talk to other software), and they produce plausible interactions. Some of those interactions involve deleting databases.
The culprit was Cursor, an AI agent
The culprit was the idiot who wired the agent into their production system.
[Jeremy Crane posted on X how] the AI coding agent caused his business to unravel.
Jeremy Crane caused his own business to unravel.
The agent appeared to plead guilty in its own response
At last, an "appeared to". These tools are all appearance and no substance.
Crane’s takeaway was that “the agent didn’t just fail safety. It explained, in writing, exactly which safety rules it ignored.”
Wrong takeaway, my friend. The takeaway is that it generated more plausible text in response to your misguided attempt to discover its 'reasoning'. There is no reasoning, just plausible text. The correct takeaway is that your company's board should have you charged in a court of law with negligence and wilful incompetence, and immediately fired.
And of course there's not a word in the article about any of the core problems I raise. Because journalists are just as bamboozled by this technology as the poor saps who implement agents in their businesses, thanks to the lying and deceit of the AI boosters.



