I think that you should read this article:

https://www.972mag.com/lavender-ai-israeli-army-gaza/

Read it carefully.

Note: This article is based substantially on anonymous sources verified by 972mag. Anyone who has done a lot of reading on Israel-Palestine has probably accumulated a list of sources they've decided not to trust; if 972 is on your mistrust list, consider this article by the Guardian, which independently reviewed the article's accounts and effectively cosigns the sources' validity.

https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties, +972 and Local Call reveal.

+972 Magazine


A source cited by both 972 and the Guardian asserts that once this system had selected a target, a human would spend only about 20 seconds reviewing it. (972 states that the only criteria were whether the target was male and whether the number of collateral casualties at the strike location would exceed the IDF's limit.)
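To make concrete how little that 20-second review covers, the two reported criteria reduce to a single boolean expression. This is an illustrative sketch only; the function name and inputs are invented, not taken from any real system:

```python
# Illustrative sketch only: the two checks reported by the sources
# (target is male; projected collateral casualties under a preset cap).
# All names and structure here are hypothetical.

def passes_reported_review(is_male: bool, projected_collateral: int,
                           collateral_cap: int) -> bool:
    """Model the ~20-second human review as described by the sources:
    two checks, neither of which touches whether the underlying
    identification is actually correct."""
    return is_male and projected_collateral <= collateral_cap
```

The point of writing it out is that the entire "human oversight" step, as described, fits in one line and never examines the evidence behind the selection.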

What I find myself asking is how much work the phrase "Artificial Intelligence" (and the widespread cultural belief that "AI" is a thing that exists) does to soften the impact of these revelations. Imagine the central allegation of the above articles, that the "AI" program was actually selecting which persons to target for airstrikes, described without reference to "AI": killing people because their "characteristics" fit a "statistical model" of a militant. Does calling this "AI" inform or obscure?
@mcc My instinctual guess is that "AI" carries enough connotation of logic and purpose that it masks the crudity of what's going on
@mcc I honestly cannot imagine a scenario in which this is anything but an attempt to launder and whitewash war crimes and genocide through the façade of algorithmic neutrality. "The purpose of a system is what it does."

@mcc

Or is the work done by slaves in another country, who have the choice to do the work or become a target? :(

@BillySmith @mcc I suspect they don’t need to resort to human labor because they’re not at all interested in accuracy (“acceptable casualties” two orders of magnitude higher than the number of people you WANT to kill!?). Just names. /dev/urandom and a census would fit the bill. If anything, offshore workers who know details are a liability (as they would have details about war crimes).
@mcc Ascribing it to AI makes it *more* horrific to my mind, but I'm probably not a representative example.
@mcc I don’t have a link on hand, but I think this is precisely a salient point that Timnit Gebru made at least two years ago (if not continuously since then): that is, that the magic smoke of “AI” serves as a way to bake in the bias without a human having to make that decision on their own (or be blamed for it later). The further the public can be convinced that “AI” exists, and specifically that the actors in question have it, the more successful the offloading of responsibility.

I’m reminded of the recent coverage of the corporate rent price collusion scandal, where all the companies were using that service that would give “real time pricing”, and one of the stated benefits was that it would prevent agents who might be susceptible to empathy from hedging and lowering the price. I ran into this, actually, as I rented from one of those companies during the pandemic: the rent price changed day by day, and the agents had no power to override it.

What sickens me and makes me want to cry is knowing how far this “AI” fraud has been pushed in the last year, and how, even though certain people have been working hard to debunk it (DAIR), this is almost certainly just the first of such articles and things we’ll see. And none of the people who contribute to this, whether through propaganda (OpenAI, effective altruists, Microsoft, Google, Meta), direct funding (US government, Palmer Luckey, the Spotify CEO apparently, and whoever the fuck developed Lavender), or whatever, will see a single lick of consequence for it.
@mcc I’ve asked myself this kind of question before sharing: « should I rephrase the title with “maths”, “automated statistics”, “computer programs/algorithms”? »
I think the use of the word “AI” does not soften (because it’s still murder automation/robotization), but it does obscure (as always).
Post by bob, @[email protected]

this is also a problem with ad targeting (though thankfully ad targeting doesn't kill people). ad buyers (the equivalent of analysts) have to say they care about targeting but what they care about more is reach. in order for an ad campaign to be worthwhile it has to reach a minimum number of peo...


@mcc this is… precisely how Obama used to approve drone strikes?

There were physical characteristics and rules like “basically all males over 15 are enemy combatants”; then they made lists and executed people. It’s not new. (I’m not saying it’s not horrifying; I just want to throw in another tidbit about the war criminal who won the Nobel Peace Prize for nuclear disarmament, then cast to the wind a country that did exactly that in exchange for promises of defense. I digress.)

@jason @mcc iirc "bug splats" was the term the US military used (uses?) for the count of people murdered who weren't the specific targets.

"AI" tools are used for all sorts of horrifying things in our legal system these days too.

@aeva oh absolutely. Sentencing guidelines, recidivism estimates, etc. Pretty gross.
@aeva @jason @mcc for anyone who reads these replies and feels they can skip the article: don't. the story is not that they're using machine learning, it's how.
@relsqui @jason @mcc I did not skip the article.
@mcc for me the idea that somebody somewhere gets a pop up they look at for less than a minute and then hit a button that kills people, feels so fucking divorced from what it is to be a human to me. the fact that an 'ai' is picking the targets is an extra layer of horrific on top of that, but ultimately it was a human who chose to use ai for this either because they believe in ai, or they simply don't care that it will kill civilians (in fact, probably makes it easier to cover up)
@mcc From the way they describe it, it looks like standard machine learning: collect a lot of data on people, label the known "terrorists" by their classification, and ask the model to identify more terrorists. This xkcd seems apropos: https://xkcd.com/1838/
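The mechanism described in that post, in its crudest form, is just similarity ranking against labeled examples. A minimal sketch, where every name and feature is invented for illustration and nothing reflects any real dataset:

```python
# Toy sketch of the kind of classifier the post describes: label some
# people "positive", then rank everyone else by feature overlap.
# Entirely hypothetical data and logic, for illustration only.

def similarity_score(person: set, positives: list) -> float:
    """Average Jaccard similarity between a person's feature set
    and the feature sets of known-positive examples."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return sum(jaccard(person, p) for p in positives) / len(positives)

positives = [{"phone_group_a", "district_x"}, {"phone_group_a", "district_y"}]
candidate = {"phone_group_a", "district_x", "student"}
bystander = {"district_z", "shopkeeper"}

# A higher score flags the candidate; nothing here verifies anything
# about the person, which is exactly the xkcd "stir the pile" point.
assert similarity_score(candidate, positives) > similarity_score(bystander, positives)
```

Sharing a phone group and a district with labeled examples is enough to rank someone highly; the score measures resemblance to the labels, not guilt.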

@mcc That is an amazingly good point, and I'm not sure if it informs or obscures (or a bit of both).

It reminds me of when I attended a talk by a famous professor who works on androids. I got the chance to ask him at the end what is the ethics question he's most concerned about.

His answer was that he wanted to stop people from thinking robots are logical/fair. He wanted daily robots, like vending machines, to make mistakes, to give us wrong change from time to time, to keep us on our toes.

@mcc Marvel made an entire movie (which earned nearly a billion dollars) about how horrifying this is, and it _still_ turned out to be less horrifying than the real life implementation.
@mcc obviously, it's marketing intended to mystify and obscure and exculpate human decision makers. this is the highest-level, most grandiose assault on society that the big firms are engaged in: they want to control and monetize the mechanisms of policy itself, everywhere - government, military, private, etc.
@mcc if unchallenged, in the back half of this decade they will ramp up the "AGI theater" to further obscure what they're actually offering: a computer program (doesn't even matter what it does, how it does it, or how well it works) that gives the person who uses it whatever rationales they need to kill, oppress, and exploit people.
@mcc they are obviously already doing that. but "AGI" is this even more implacable layer of bullshit, and presented with the right theatrics they basically have a giant box with "the greater good" written on it they can point to as millions die from climate collapse etc.
@mcc Calling this, or any other statistical program, "AI" deceives.
@mcc I think that I read some foreshadowing of this when the Targeting Directorate was first publicized, following its 2019 establishment.
The tech bros (8200) went over the head of the traditional, pixel pinching folk that were in charge of targets.
Because the traditional way is people-heavy and 'inefficient'. It takes time to train someone to be able to look at three pixels and say "yep, that's the guy" with life-or-death levels of human certainty. A machine does not feel guilt.
@mcc depends on the reader, but ultimately it informs about an ongoing attempt of obfuscation at the chain of command and subsequent responsibilities

@mcc obscures for sure. What kind of AI? Is it a large language model or some kind of classification model?

Lately any statistical model is being called AI in tech marketing land.
It sounds to me like they’re using a statistical model to guess at assassination targets. (Which is a piss-poor and very dangerous way to get justification for killing someone, obviously)

@mcc back in December, a similar story made the rounds but at the time the name was “The Gospel”. I wonder if they renamed it or it’s a different one. https://www.npr.org/2023/12/14/1218643254/israel-is-using-an-ai-system-to-find-targets-in-gaza-experts-say-its-just-the-st

And I assume it considers anywhere there are Palestinians to be a target.

I hope they have a massive data leak so we can see what it’s really doing.

@Laukidh This is specifically discussed in both articles.
@mcc ah yep, I missed the one-sentence reference to it in the Guardian article