Number of AI chatbots ignoring human instructions increasing, study says https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says?CMP=Share_iOSApp_Other
Ok, so this is a problem, but not for the reason the headline claims. The AI agents are not «ignoring instructions»; they are following them. Because they are programmed to act semi-independently, without checking for new instructions at every step along the way, the consequences of the instructions they are given reach well beyond what anyone foresaw. We have had fables warning us about this for thousands of years (think of King Midas, unable to foresee the consequences of everything he touches being turned to gold), yet we stubbornly refuse to listen to our own warnings.
