The Iranian children killed in the airstrike on the Shajareh Tayyebeh primary school, early in the US/Israeli assault on Iran, were not, as the popular trope increasingly has it, 'killed by AI' — they were killed by years & years of lazy & inaccurate bureaucracy; hiding behind AI allows the real culprit(s) to enjoy impunity for an atrocity & likely war crime.

Kevin Baker's detailed exploration of what was behind the airstrike is a long read but worth it!

#iran
https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying

AI got the blame for the Iran school bombing. The truth is far more worrying

LLMs-gone-rogue dominated coverage, but had nothing to do with the targeting. Instead, it was choices made by human beings, over many years, that gave us this atrocity

The Guardian
@ChrisMayLA6 I didn’t know Palantir, in addition to its many suspect operations and activities, was also involved in the development of targeting systems for the military. We certainly do live in interesting times.
@alex_p_roe @ChrisMayLA6 "optimising" the kill chain for latency is something I should have attributed to Palantir a long time ago.

@ChrisMayLA6

The first casualty of war is truth.

Every f###king time! 😒

@ChrisMayLA6 Regardless of the method used to make a decision (including targeting during a war), you cannot duck the accountability - whether it's gut feel, Excel, AI or anything else, the US military is still accountable.
@jschwa1 @ChrisMayLA6 I think this is one of the key problems with AI decision tools (also see self-driving vehicles, and customer support chatbots that offer legally problematic advice). There's still a senior human who's legally accountable for the AI's decisions. I'm not convinced that governments and businesses are thinking this through. Unlike that low-paid worker they fired to replace with AI, senior leaders can't shrug off mistakes and blame them on the worker.
@guigsy @ChrisMayLA6 Agreed. I fear there will be some big AI failures, potentially leading to business/societal failures, that drive this home to senior leaders. For now, however, many appear to be blissfully ignorant 🙄
@jschwa1 @ChrisMayLA6 my general rule is to treat LLM-based AIs as an over-enthusiastic unpaid intern. It'll confidently agree to do anything and confidently produce output. But you really need to check that output before it goes anywhere. It does an 80% ok job... And recent releases of Gemini, ChatGPT, etc seem to be asymptotically levelling off on their accuracy, as if the current architecture will never get better than 90%. Which isn't really good enough when deciding which targets to bomb.
@guigsy @jschwa1 @ChrisMayLA6 This wasn't an LLM, though. Also, lots of people were involved after the AI; they just didn't question the result it gave.
@Kyebr @jschwa1 @ChrisMayLA6 ok, acknowledged. But it was still a black-box algorithm with not enough oversight. Following the same mantra that if you process fast enough, you'll net more correct output... ignoring that you're also producing a higher volume of slop. A computer can't be accountable for the mistakes.
@guigsy @jschwa1 @ChrisMayLA6 I wasn't defending LLMs. Just pointing out that it's not as simple as putting a human in the loop. They have to actively question the assertions made by the AI (or any computer in general).
@ChrisMayLA6 Thank you. I have used the reference to the article to write to some people in NL.

@ChrisMayLA6

Two thoughts arise from this.

First, the article says that "a chatbot did not kill those children. People failed to update a database, and other people built a system fast enough to make that failure lethal." Classic GIGO, but on steroids, and with lethal implications for scores of innocent schoolgirls.

Second, as the saying goes, a poor workman blames his tools. So maybe the primary problem isn't the tools, however suspect these particular ones are, but the workmen. (And, yes, I use the masculine advisedly.)

@alantperry

Oh yes, we need to find those people & hold them to account... the tools are just (in the end) doing what they're told by their human deployers.

@ChrisMayLA6 yeah, I've been saying this for quite some time now: #AI is being pushed down our throats because the only problem it really solves is culpability and accountability.

@ChrisMayLA6

Whether it was Claude, Maven, or some procedure that lacked adequate safeguards provides no cover to anyone.

You look before you shoot.

@ChrisMayLA6

In this context, also check out Ken Klippenstein’s articles published on February 26 and on March 3.

Trump Cheers Lethal Doxxing

State of the Union affirms national security state's greatest show on Earth

Ken Klippenstein
@ChrisMayLA6 @karanotlar Everyone seems to have memory-holed the lessons from the machine learning algorithm hype almost 2 decades ago, but today’s AI is basically the same thing, and it’s still “garbage in, garbage out.”