I think that you should read this article:

https://www.972mag.com/lavender-ai-israeli-army-gaza/

Read it carefully.

Note: This article is based substantially on anonymous sources verified by 972mag. Anyone who has done a lot of reading on Israel-Palestine has probably accumulated a list of sources they've decided not to trust; if 972 is on your mistrust list, consider this article by the Guardian, which independently reviewed the article's accounts and effectively cosigns the sources' validity.

https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties, +972 and Local Call reveal.

+972 Magazine


A source cited by both 972 and the Guardian asserts that once this system had selected a target, a human would spend only about 20 seconds reviewing it. (972 states the only criteria were "is the target male" and whether the number of collateral casualties at a strike location would exceed the IDF's limit.)

What I find myself asking is how much work the phrase "Artificial Intelligence" (and the widespread cultural belief that "AI" is a thing that exists) does to soften the impact of these revelations. Imagine the central allegation of the above articles— that the "AI" program was actually selecting which persons to target for airstrikes— described without reference to "AI". Killing people because their "characteristics" fit a "statistical model" of a militant. Does calling this "AI" inform or obscure?
@mcc I don’t have a link on hand, but I think this is precisely a point that Timnit Gebru made at least two years ago (if not continuously since then): that the magic smoke of “AI” serves as a way to bake in the bias without a human having to make that decision on their own (or be blamed for it later). The further the public can be convinced that “AI” exists, and specifically that the actors in question have it, the more successful the offloading of responsibility.

I’m reminded of the recent coverage of the corporate rent price collusion scandal, where all the companies were using a service that would give “real time pricing”, and one of its stated benefits was that it would prevent agents who might be susceptible to empathy from hedging and lowering the price. I actually ran into this, as I rented from one of those companies during the pandemic: the rent price changed day by day and the agents had no power to override it.

What sickens me and makes me want to cry is knowing how far this “AI” fraud has been pushed in the last year, and how, even though certain people have been working hard to debunk it (DAIR), this is almost certainly just the first of such articles and things we’ll see. And none of the people who contribute to this, whether through propaganda (OpenAI, effective altruists, Microsoft, Google, Meta), direct funding (the US government, Palmer Luckey, the Spotify CEO apparently, and whoever the fuck developed Lavender), or whatever, will see a single lick of consequence for it.