I think that you should read this article:

https://www.972mag.com/lavender-ai-israeli-army-gaza/

Read it carefully.

Note: This article is based substantially on anonymous sources verified by 972mag. Anyone who has done a lot of reading on Israel-Palestine has probably accumulated a list of sources they've decided not to trust; if 972 is on your mistrust list, consider this article by the Guardian, which independently reviewed the article's accounts and effectively cosigns the sources' validity.

https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties, +972 and Local Call reveal.

+972 Magazine


A source cited by both 972 and the Guardian asserts that once this system had selected a target, a human would spend only about 20 seconds reviewing it. (972 states the only criteria were whether the target was male and whether the number of collateral casualties at the strike location would exceed the IDF's limit.)

What I find myself asking is how much work the phrase "Artificial Intelligence" (and the widespread cultural belief that "AI" is a thing that exists) does to soften the impact of these revelations. Imagine the central allegation of the above articles— that the "AI" program was actually selecting which persons to target for airstrikes— described without reference to "AI". Killing people because their "characteristics" fit a "statistical model" of a militant. Does calling this "AI" inform or obscure?

@mcc this is… precisely how Obama used to approve drone strikes?

There were physical characteristics and rules like "basically all males over 15 are enemy combatants"; then they made lists and executed people. It's not new (I'm not saying it's not horrifying, I just want to throw in another tidbit about the war criminal who won the Nobel Peace Prize partly for nuclear disarmament, then left a country that did exactly that, in exchange for promises of defense, twisting in the wind. I digress)

@jason @mcc iirc "bug splats" was the term the US military used (uses?) for the count of people murdered who weren't specific targets.

"AI" tools are used for all sorts of horrifying things in our legal system these days too.

@aeva @jason @mcc for anyone who reads these replies and feels they can skip the article: don't. the story is not that they're using machine learning, it's how.
@relsqui @jason @mcc I did not skip the article.