The Iranian children killed in the airstrike on the Shajareh Tayyebeh primary school, early on in the US/Israeli assault on Iran, were not, as the popular trope increasingly has it, 'killed by AI', but rather were killed by years & years of lazy & inaccurate bureaucracy; hiding behind AI allows the real culprit(s) to enjoy impunity for an atrocity & likely war crime.

Kevin Baker's detailed exploration of what was behind the airstrike is a long read but worth it!

#iran
https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying

AI got the blame for the Iran school bombing. The truth is far more worrying

LLMs-gone-rogue dominated coverage, but had nothing to do with the targeting. Instead, it was choices made by human beings, over many years, that gave us this atrocity

The Guardian
@ChrisMayLA6 Regardless of the method used to make a decision (including targeting during a war), you cannot duck the accountability - whether it is gut feel, Excel, AI or anything else, the US military is still accountable.
@jschwa1 @ChrisMayLA6 I think this is one of the key problems with AI decision tools (also see self-driving vehicles, and customer support chatbots that offer legally problematic advice). There's still a senior human who's legally accountable for the AI's decisions. I'm not convinced that governments and businesses are thinking this through. Unlike that low-paid worker they fired to replace with AI, senior leaders can't shrug off mistakes and blame them on the worker.
@guigsy @ChrisMayLA6 Agreed. I fear that there will be some big AI failures, potentially leading to business/societal failures, that drive this home to senior leaders. However, for now, many appear to be blissfully ignorant 🙄
@jschwa1 @ChrisMayLA6 my general rule is to treat LLM-based AIs as an over-enthusiastic unpaid intern. It'll confidently agree to do anything and confidently produce output. But you really need to check it before the output goes anywhere. It does an 80% ok job... And recent releases of Gemini, ChatGPT, etc. seem to be asymptotically levelling off on their accuracy. Like the current architecture will never get better than 90%. Which isn't really good enough when deciding which targets to bomb.
@guigsy @jschwa1 @ChrisMayLA6 This wasn’t a LLM through. Also, lots of people were involved after the AI they just didn’t question the result given.
@Kyebr @jschwa1 @ChrisMayLA6 ok, acknowledged. But it was still a black box algorithm with not enough oversight. Following the same mantra, that if you process fast enough, you'll net achieve more correct output... ignoring that you're also producing a higher volume of slop. A computer can't be accountable for the mistakes.
@guigsy @jschwa1 @ChrisMayLA6 I wasn't defending LLMs. Just pointing out that it's not as simple as putting a human in the loop. They have to actively question the assertions made by AI (or any computer in general).