This is a long piece from the Guardian, but I found it worth reading. It's a broad history of military targeting and chains of decision-making, and of how "AI" is just a wrapper taking the blame for a bigger Palantir project to speed up targeting and eliminate any chance of double-checking in the moment.

Importantly, the piece doesn't treat this as something new, either: it cites similar targeting clusterfucks in 2003 and even back in the Vietnam war era.

https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying

AI got the blame for the Iran school bombing. The truth is far more worrying

LLMs-gone-rogue dominated coverage, but had nothing to do with the targeting. Instead, it was choices made by human beings, over many years, that gave us this atrocity

The Guardian
Even though this is the most specific and horrific application of the attempt to speed up these decisions, parts are broadly applicable to our work. Specifically, this quote:
"In his 1995 book Trust in Numbers, the historian of science Theodore Porter argued that organisations adopt quantitative rules not because numbers are more accurate but because they are more defensible. Judgment is politically vulnerable. Rules are not. The procedure exists to make discretion disappear, or seem to. The system’s actual flexibility lives entirely in this unacknowledged interpretive work, which means it can be removed by anyone who mistakes it for inefficiency."