RE: https://infosec.exchange/@hacks4pancakes/116192434654015384
The only use case for AI is culpability laundering.
The US military has infinite resources and could have hired infinite people to draw up target lists, and they could have made a mistake in those target lists. Military error is not unique to AI, even if AI intensifies and mechanizes it. Previously they would have blamed bad intel or the fog of war, but either way that would be an admission of culpability and error residing within the military.
Notice how the mere existence of AI serves to launder culpability here: by refusing to confirm or deny the use of AI in targeting, we are left to imagine a vast unknowable cybernetic military such that AI and humans can no longer be disentangled. The creation of a sense of "it is impossible to know" is the product. If they did use Claude to target bombs, you get the literal deflection of culpability - AI did it, not us, but even if they didn't, the amorphous integration of AI into military systems renders the same result: the AI may not have picked the targets, but it did provide the intel, hired the analysts, and so on.
Really appreciate this framing, it's what I was reaching toward with Artificial Authority:
"Artificial Intelligence is not a cohesive tool...but rather than a technology, AI signifies a particular vie for power that notably incurs upon the domain of erudition, by pirating the language of intelligence and consciousness and the actions of sense making. This is an attempt to alienate authority toward something that cannot be held to account - to create something of a higher power."
Full text here: https://bruderrudmann.org/2025/10/17/artificial-authority.html
@jonny Yes but also: https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction
That can be and has been done for years-to-decades even without LLMs.
The new capability now (which isn't insignificant) is how much easier and more broadly it can be done.
@jonny There was, it just had a strong piece of it that was a social technology and wasn't purely technical.
It was "outsourcing to a private company which advertizes expertise to develop an opaque proprietary automated bureauocratic tool".
Look at COMPAS.
https://en.wikipedia.org/wiki/COMPAS_%28software%29
More info about its history:
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Each templated question (e.g. "How likely will ___ re-offend if released?") required a long time and a lot of development, advertising, and corporate capture to bring it into use,
but absolutely:
@jonny I don't want to say that there's nothing new here. Quantity and speed have a massively significant quality of their own, and it means that we cannot necessarily plan to frame and fight this new development the same way as before.
But so, this is not entirely unprecedented -- there are cases of the same playbook being run in the same ways before that we can learn from and use the knowledge gained to fight this significant threat better.
@jonny One thing that imo it teaches is that the specific technology, LLMs here, and its specific mechanisms and affordances might be less significant than the framing and opacity it is allowed in its social presentation.
Notably, black-box analysis of the COMPAS algorithm (despite it being de facto part of many states' sentencing laws and practices, it is proprietary and protected from civil inspection) shows that it is probably a very simple, describable, comprehensible, and dissectable (racist) model, iirc possibly even a linear regression.
It was the /opacity/ that gave it its laundering power, not any inherent internal /complexity/.
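To make the "simple and dissectable" claim concrete, here is a minimal sketch of that kind of black-box surrogate fitting, assuming ProPublica's published compas-scores-two-years.csv and its column names (age, priors_count, decile_score); treat it as illustrative, not as Northpointe's actual model:

```python
# Minimal surrogate-model sketch: fit a transparent linear model to the
# black box's outputs. Assumes ProPublica's compas-scores-two-years.csv.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("compas-scores-two-years.csv")

# Two plainly interpretable inputs; COMPAS itself ingests a 137-question survey.
X = df[["age", "priors_count"]]
y = df["decile_score"]  # the COMPAS risk score, 1 (low) to 10 (high)

model = LinearRegression().fit(X, y)
print("R^2:", round(model.score(X, y), 3))  # variance of the score explained
print("coefficients:", dict(zip(X.columns, model.coef_)))
```

If a couple of mundane columns reproduce most of the score, the mystique is doing the heavy lifting.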
This lines up with other cases where opacity allowed people to project deeper meaning and trust, e.g. the ELIZA bot.
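For a sense of how little machinery that kind of projection needs, here is a hypothetical ELIZA-style responder in a few lines of Python (not Weizenbaum's actual script, just the pattern-to-canned-reply shape of it):

```python
import re

# The entire "intelligence" is a list of regex -> canned-reply pairs.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.I), "Why does your {0} concern you?"),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am worried about my job"))
# -> "Why do you say you are worried about my job?"
```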
@jonny So, whether "it's significantly more of (and therefore has new implications from) what is nevertheless qualitatively the same thing" is true feels to me like splitting hairs -- which I will gladly do if you're interested, I like it, but it tends to frustrate people --
but I do think it is /useful/.
In particular, I think a tactic that was found to work for public outreach [citation needed] for previous fights which I believe were similar,
was a two-pronged approach [again, citation needed] that
@jonny (a) contextualized the usage and configuration of the algorithms/ai/models, to make it clear that both building and using them are human/organizational decisions made by particular people, attacking the attempt to assign agency, and therefore responsibility/blame, to the model itself that should go to the decision-makers,
and
(b) removed the mystique and hazy claims of authority from the model through some combination of accurate reporting, black-box analysis of its results, and reverse-engineering from its output (a rough sketch follows below). That attacks the ability to appeal to some real knowledge or authority encoded in the model itself,
and between the two of them -- at least in the court of public opinion, and sometimes [citation needed] in policy responses -- recenters the laundered responsibility back to those responsible.
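A rough sketch of what prong (b) can look like in practice, assuming the same ProPublica CSV and their "medium/high risk" cutoff at decile 5; this illustrates output-only auditing, not their exact methodology:

```python
# Output-only audit of a black-box score: no access to the model required,
# just its scores and outcomes. Assumes ProPublica's compas-scores-two-years.csv.
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")
df["high_risk"] = df["decile_score"] >= 5  # "medium/high" per ProPublica's cut

# False positive rate: labelled high risk, yet did not reoffend in two years.
for race, group in df.groupby("race"):
    did_not_reoffend = group[group["two_year_recid"] == 0]
    fpr = did_not_reoffend["high_risk"].mean()
    print(f"{race}: FPR = {fpr:.2f} (n = {len(did_not_reoffend)})")
```

Disparities in a table like this are exactly the kind of concrete, reportable fact that punctures appeals to the model's encoded authority.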
@jonny The particular commonality here that I think I see, and why I think this comparison is worth bringing up, is that in comparison to actual money laundering the sort of laundering happening here is INCREDIBLY shallow. Like, you absolutely don't have to hand it to money launderers, but when tracing their laundering you have to bring in forensic accounting and trace subtle flows of cash, credit, goods, debts, obligations, and gambling. It's a multi-layered maze.
Whereas this -- this is just the people doing the thing and then putting on an LED-bedazzled mask and arguing The Robot Did It.
And I think that shallow laziness can have tactical implications for opposing it, and I think there are other previous cases of similar shallowness that we can look at for inspiration.
@jonny The automation involved (including #AI) and the number of people involved in targeting make it very difficult to assign ethical responsibility. This is the “ethical distance of killing.” Consider the following. It’s old information (‘00s) but probably still true.
A human commander makes the decisions on target selection based on their staff recommendations, doctrine, plans and so on. AI is almost certainly involved with providing some of the info and recommendations the commander uses to decide. The commander gives the order to launch. Someone else relays that order to yet another person who pushes the launch button. The missile is on its way — but WAIT, did you know these weapons can be retargeted in flight? The changes may come from an entirely different part of the chain of command. The missile arrives in the target vicinity. It then uses terminal guidance to steer itself to “the target”. This can involve AI as well, and it might choose among several possible targets.
So who is responsible for causing the deaths when a missile explodes? The causal chain is quite long. The ethical responsibility for killing in war is ambiguous, and that makes it so much easier for people to do it.
#ethics
If they did use Claude to target bombs, you get the literal deflection of culpability - AI did it, not us,
It acted under your orders, you're in for everything it did. Sucks to be you.
There's also spam. I remember a study looking at how well AI integration went for companies, and the ones who had really good results were the ones who sent spam and could now generate plausible-looking walls of meaningless text a lot more easily.
💥 BOOM 💥
PEBCAK
AI’s primary function is to consolidate power. But when a machine makes the decision, it has the added benefit of shielding culpability.
Fair enough, I have other use cases in mind none of them good