A very good and quite long read, in language that is sometimes hard to understand - at least for me.
So I used NotebookLM to generate an infographic from it, summarizing the most important aspects from my POV.
#Palantir #Maven #Iran

https://artificialbureaucracy.substack.com/p/kill-chain @adfichter https://infosec.exchange/@adfichter/116284168629855430

@bodomenke Say "with AI", so we can assume the summary is bullshit.
Checking the "1,000 targets per hour" statistic, it turns out it's not from 2026 but from 2024, and not related to the Iran war.

@Kraemer_HB The infographic clearly states that it was created by NotebookLM. However, I have now noted that in the post as well. Thanks for bringing it up.

On the matter: the infographic doesn't state that the goal of 1,000 targets per hour was defined for the 2026 Iran war. Like the underlying text, the infographic states that this goal was already defined in 2024: one target per 72 seconds for each of 20 people is 1,000 targets per hour. (1/2)
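For reference, the arithmetic behind that figure (my own back-of-the-envelope check, not a quote from the article):

3,600 s/hour ÷ 72 s/target = 50 targets/hour per person
50 targets/hour × 20 people = 1,000 targets/hour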

@yala @rhandos

I believe it is reasonable & healthy to assume that there will be failures in AI-generated content, but I suggest you leave stereotypes about AI aside and check facts & figures more carefully before categorizing something as "bullshit". ;-)

FYI: I did the calculation before posting the infographic - and I just did it once again. (2/2)
@Kraemer_HB @yala @rhandos

@bodomenke @yala @rhandos Thanks for revisiting. I didn't do the calculation; I only wondered initially why the measure "1,000 per hour" is used rather than the same measure as for 2024.
That's misleading (it's all the same to an AI).

I took other aspects of the article as most important: with a faster "Kill Chain", life-and-death decisions are reduced to numbers. That failed horribly in 2003, too. And AI does this perfectly today, invisibly destroying any accountability in military decision-making.