RE: https://infosec.exchange/@hacks4pancakes/116192434654015384

The only use case for AI is culpability laundering.

The US military has infinite resources and could have hired infinite people to draw up target lists, and those people could have made mistakes in those target lists. Military error is not unique to AI, even if AI intensifies and mechanizes it. Previously they would have blamed bad intel or the fog of war, but either way that would be an admission of culpability and error residing within the military.

Notice how the mere existence of AI serves to launder culpability here: by refusing to confirm or deny the use of AI in targeting, we are left to imagine a vast, unknowable cybernetic military in which AI and humans can no longer be disentangled. The creation of a sense of "it is impossible to know" is the product. If they did use Claude to target bombs, you get the literal deflection of culpability - AI did it, not us - but even if they didn't, the amorphous integration of AI into military systems renders the same result: the AI may not have picked the targets, but it did provide the intel, hire the analysts, and so on.

#USPol

Laundering culpability works in both directions: externally, toward us, and internally, toward the people within the machine. It generates a new category of casualty, the algorithmic casualty, which is immediately normalized. The machine can function by providing a plausible lie to the people who operate it: that they are just doing what the provided tools told them to do.
Along a different dimension, culpability laundering works for both the users and the rentiers of AI. AI companies can sell a product that is both miraculously accurate and completely fallible. There is no boundary around the guaranteed function aside from "everything," so failure to perform "everything" can't be considered a bug - it is both an unerring god and a smol bean language model. The AI bubble rests in no small part on the bet that legal precedent will shake out in a way that protects AI companies from harms caused by their models, at which point everything becomes legal for corporations and nothing is legal for humans. It is a bet that capital can come to own reality itself.
If you're here to say "nothing is new and everything has always been this way," you're too late - everyone got here ahead of you.

@jonny

Really appreciate this framing; it's what I was reaching toward with Artificial Authority:

"Artificial Intelligence is not a cohesive tool...but rather than a technology, AI signifies a particular vie for power that notably incurs upon the domain of erudition, by pirating the language of intelligence and consciousness and the actions of sense making. This is an attempt to alienate authority toward something that cannot be held to account - to create something of a higher power."

@jonny Also, when a company can't get away with using an unsupervised LLM, they'll try to have ablative employees/interns supervise it and take the blame.
@jonny "i was just following AI's orders"
@jonny Regarding your last point above, this reminds me of the plot of a certain Black Mirror episode (CW: spoilers|depressing): https://en.wikipedia.org/wiki/Men_Against_Fire

@jonny Yes but also: https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction

That could be, and has been, done for years, even decades, without LLMs.
The new capability (which isn't insignificant) is how much more easily and broadly it can be done.


@gaditb
Really, there has been a technology that one could pose any question to and receive plausible-seeming responses, propped up by billions in capex, capable of laundering culpability for both user and purveyor, for decades now? I must have missed that and failed to consider how nothing is new except scale.

@jonny There was - it just had a strong component that was a social technology rather than a purely technical one.

It was "outsourcing to a private company which advertizes expertise to develop an opaque proprietary automated bureauocratic tool".

Look at COMPAS.
https://en.wikipedia.org/wiki/COMPAS_%28software%29
More info about its history:
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Each templated question (e.g. "How likely is ___ to re-offend if released?") took a long time and a lot of development, advertising, and corporate capture to bring into use,
but it absolutely:


@jonny
(a) gave plausible-seeming responses (an opaque numerical answer that often fit biases)
(b) was and is propped up by what I assume is billions in capex (this is my weakest claim, but I don't think the particular scale of money used in advertising and defending it is core to your argument)
(c) launders culpability for the user -- as far as I know no court has been held liable for using it, and it is still in use
(d) launders culpability for the purveyor -- Northpointe is still in business and, as far as I know, has never been charged for the harm caused, nor even for false advertising, after its tool was shown to be both racially biased and less accurate than the group consensus of people with no relevant expertise

@jonny I don't want to say that there's nothing new here. Quantity and speed have a massively significant quality of their own, and it means that we cannot necessarily plan to frame and fight this new development the same way as before.

But this is not entirely unprecedented -- the same playbook has been run in the same ways before, and we can learn from those cases and use that knowledge to fight this significant threat better.

@jonny One thing that imo it teaches is that the specific technology, LLMs here, and its specific mechanisms and affordances might be less significant than the framing and opacity it is allowed in its social presentation.

Notably, black-box analysis of the COMPAS algorithm (despite it being de facto part of many states' sentencing laws and practices, it is proprietary and protected from civil inspection) shows that it is probably a very simple and very describable, comprehensible, and dissectable (racist) model - iirc, possibly even a linear regression.

It was the /opacity/ that gave it its laundering power, not any inherent internal /complexity/.
This lines up with other cases where opacity allowed people to project deeper meaning and trust, e.g. the ELIZA bot.
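A minimal sketch of that kind of black-box probing, assuming a purely hypothetical sealed scoring tool (not the actual COMPAS model): if the hidden product really is just a linear model, fitting least squares to queried input/output pairs recovers its weights exactly.

```python
# Illustrative only: a stand-in "proprietary" risk score that is secretly a
# linear model, and a black-box probe that recovers it from queries alone.
import numpy as np

rng = np.random.default_rng(0)

def opaque_risk_score(x):
    # The vendor's sealed tool: callers see inputs and a number, never these weights.
    hidden_weights = np.array([2.0, -0.5, 1.5, 0.0])
    return x @ hidden_weights + 3.0

# Query the black box on synthetic intake records (4 made-up features).
X = rng.normal(size=(200, 4))
y = opaque_risk_score(X)

# Fit a linear model to the observed input/output pairs via least squares.
X_aug = np.hstack([X, np.ones((len(X), 1))])  # append an intercept column
coef, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

print("recovered weights:  ", np.round(coef[:-1], 3))  # ~ [2.0, -0.5, 1.5, 0.0]
print("recovered intercept:", round(coef[-1], 3))      # ~ 3.0
```

With a deterministic linear black box the recovery is exact; the point is that opacity tells you nothing about internal complexity.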

@gaditb
That's sort of exactly what I'm saying. But the two are not possible to decouple. The framing is possible because of the technology. Prior-generation quant policing can still be interrogated as a model designed for a specific aim, and it's possible to articulate resistance to a particular application and deployment of a narrowly focused product. It's not so possible to articulate an opposition to the magic everything juice sloshing as lubricant through the whole machine. I don't find "it's just more of qualitatively the same thing" to be true or useful for understanding what's happening.

@jonny So, whether "it's significantly more (and therefore has new implications) of nevertheless qualitatively the same thing" is true feels to me like splitting hairs -- which I will gladly do if you're interested, I like it, but it tends to frustrate people --

but I do think it is /useful/.

In particular, I think a tactic that was found to work for public outreach [citation needed] in previous fights which I believe were similar
was a two-pronged approach [again, citation needed] that

@jonny (a) contextualized the usage and configuration of the algorithms/ai/models, to make it clear that both building and using them are human/organizational decisions made by particular people, attacking the attempt to assign agency, and therefore responsibility/blame, to the model itself that should go to the decision-makers,
and
(b) removed the mystique and hazy claims of authority from the model through combinations of accurate reporting, black-boxing and analyzing results, and reverse-engineering from the output. That attacks the ability to appeal to some real knowledge or authority encoded in the model itself,

and between the two of them -- at least in the court of public opinion, and sometimes [citation needed] in policy responses -- recenters the laundered responsibility back to those responsible.

@jonny The particular commonality here that I think I see, and why I think this comparison is worth bringing up, is that in comparison to actual money laundering, the sort of laundering happening here is INCREDIBLY shallow. Like, you absolutely don't have to hand it to money launderers, but when tracing their laundering you have to bring in forensic accounting and trace subtle flows of cash, credit, goods, debts, obligations, and gambling. It's a multi-layered maze.
Whereas this -- this is just the people doing the thing and then putting on an LED-bedazzled mask and arguing The Robot Did It.

And I think that shallow laziness can have tactical implications for opposing it, and I think there are other previous cases of similar shallowness that we can look at for inspiration.

@jonny "The mere existence of AI". Utterly 🔥 observation.

@jonny

“AI serves to launder culpability”

… indeed.

@jonny The automation involved (including #AI) and the number of people involved in targeting make it very difficult to assign ethical responsibility. This is the “ethical distance of killing.” Consider the following. It’s old information (‘00s) but probably still true.

A human commander makes the decisions on target selection based on their staff's recommendations, doctrine, plans, and so on. AI is almost certainly involved in providing some of the info and recommendations the commander uses to decide. The commander gives the order to launch. Someone else relays that order to yet another person, who pushes the launch button. The missile is on its way — but WAIT, did you know these weapons can be retargeted in flight? The changes may come from an entirely different part of the chain of command. The missile arrives in the target vicinity. It then uses terminal guidance to steer itself to “the target”. This can involve AI as well, and it might choose among several possible targets.

So who is responsible for causing the deaths when a missile explodes? The causal chain is quite long. The ethical responsibility for killing in war is ambiguous, and that makes it so much easier for people to do it.
#ethics

@jonny

“If they did use Claude to target bombs, you get the literal deflection of culpability - AI did it, not us”

It acted under your orders, you're in for everything it did. Sucks to be you.

@jonny

There's also spam. I remember a study looking at how well AI integration went for companies, and the ones with really good results were the ones sending spam, who could now generate plausible-looking walls of meaningless text a lot more easily.

@gbargoud
Spam is culpability laundering: it displaces the origin of the spam from the creator, and creates the background expectation of spam that justifies its use in any particular context. Nobody is responsible for spam, it just "is," thanks to AI.

@gbargoud @jonny

Signal jamming.

That's the primary function of generative "AI". Flooding the zone with botshit.

@jonny And “gifting” it to kids and schools is akin to Nestle “gifting” baby formula.
@jonny People have been using machines to displace blame since long before “AI”. How many people who kill people with automobiles get away with it? How much crime gets done via corporations, or bureaucracy? Using an intermediary to deflect blame is as old as civilization.
@8r3n7
"nothing is new and nothing is worth saying" sure is a useful critical position.
@jonny That’s not what I said. Good criticism requires being specific.
@8r3n7
Yes, and equating the deflection of blame from driving a car to the deflection of blame from autonomous weapons systems is good criticism and specific
@jonny yes, but let's be honest, this is just another shitty excuse for abdication of responsibility. the leaders are simply letting themselves off the hook for murder and other crimes; they ought to be held accountable, and the folks who are obligated to do so are also abdicating their responsibilities. it is a moral crisis/catastrophe, and most of the people in the center of empire are so buffered from the consequences of the crimes, they simply do not care enough... been going on for a while actually. the french socialists did exactly zero to help the victims of empire in indochina; ho chi minh called it out, but then they actually fought back, revolutionized warfare, eliminated imperialism, and in the process the american military industrial complex figured out how to turn failure into success. it's still going on today right now in iran. the whole game is predicated on the us dollar being the global reserve currency, so once de-dollarization sets in, the MIC and then imperialism will have to be contained and eradicated by global cooperation, education, and diplomacy. it's a long road ahead but anarcho-solarpunk-communism is gonna win in the end.
@rubixhelix
That may be so, but I still often find it useful to talk about one thing at a time even if similar things have happened elsewhere and everything exists in a continuity of history.
@jonny yeah man, i am with you, i just be like that
@rubixhelix
All good, the world is like that too :)
@jonny getting mileage from this meme...

@jonny

AI’s primary function is to consolidate power. But when a machine makes the decision, it has the added benefit of shielding culpability.

@GhostOnTheHalfShell
"Use case" =/= "function" because yes of course that's it's purpose.

@jonny

Fair enough, I have other use cases in mind, none of them good.

A computer can never be held accountable
