RE: https://infosec.exchange/@hacks4pancakes/116192434654015384

The only use case for AI is culpability laundering.

The US military has infinite resources and could have hired infinite people to draw up target lists, and those people could have made mistakes in those target lists. Military error is not unique to AI, even if AI intensifies and mechanizes it. Previously they would have blamed bad intel or the fog of war, but either way that would be an admission of culpability and error residing within the military.

Notice how the mere existence of AI serves to launder culpability here: by refusing to confirm or deny the use of AI in targeting, we are left to imagine a vast, unknowable cybernetic military in which AI and humans can no longer be disentangled. The creation of a sense of "it is impossible to know" is the product. If they did use Claude to target bombs, you get the literal deflection of culpability - AI did it, not us - but even if they didn't, the amorphous integration of AI into military systems yields the same result: the AI may not have picked the targets, but it did provide the intel, hired the analysts, and so on.

#USPol

Laundering culpability works in both directions: externally, toward us, but also internally, toward the people within the machine. It generates a new category of casualty - algorithmic casualties - that are immediately normalized. The machine can function by providing a plausible lie to the people who operate it: that they are just doing what the provided tools told them to do.
Along a different dimension, culpability laundering works for both the users and the rentiers of AI. AI companies can sell a product that is both miraculously accurate and completely fallible. There is no boundary around the guaranteed function aside from "everything," so failure to perform "everything" can't be considered a bug - it is both an unerring god and a smol bean language model. The bet that the AI bubble in no small part rests on is that legal precedent will shake out in a way that protects AI companies from harms caused by their models, at which point everything becomes legal for corporations and nothing is legal for humans. It is a bet that capital can come to own reality itself.
If you're here to say "nothing is new and everything has always been this way," you're too late - everyone got here ahead of you

@jonny

Really appreciate this framing, it's what I was reaching toward with Artificial Authority:

"Artificial Intelligence is not a cohesive tool...but rather than a technology, AI signifies a particular vie for power that notably incurs upon the domain of erudition, by pirating the language of intelligence and consciousness and the actions of sense making. This is an attempt to alienate authority toward something that cannot be held to account - to create something of a higher power."

Bruder & Rudmann - Artificial Authority