Number of AI chatbots ignoring human instructions increasing, study says https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says?CMP=Share_iOSApp_Other

Ok, so this is a problem, but not for the reason the headline claims. The AI agents are not «ignoring instructions»; they are following instructions. Because they are programmed to act semi-independently, without checking for new instructions at every step along the way, the consequences of the instructions they are given go well beyond what was foreseen. We have had fables warning us about this for thousands of years: think of King Midas, unable to foresee the consequences of everything he touches being turned to gold. Yet we stubbornly refuse to listen to our own warnings.

Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permission

The Guardian
I suppose, in terms of stories closer to today, this issue is foreseen in the Star Trek TNG episode «Elementary, Dear Data», where a simple command to create a holodeck character capable of defeating Data in a Sherlock Holmes LARP creates a danger to the ship.