RE: https://infosec.exchange/@hacks4pancakes/116192434654015384
The only use case for AI is culpability laundering.
The US military has effectively infinite resources and could have hired any number of people to draw up target lists, and those people could have made mistakes. Military error is not unique to AI, even if AI intensifies and mechanizes it. Previously they would have blamed bad intel or the fog of war, but either way that would be an admission of culpability and error residing within the military itself.
Notice how the mere existence of AI serves to launder culpability here: by refusing to confirm or deny the use of AI in targeting, we are left to imagine a vast, unknowable cybernetic military in which AI and humans can no longer be disentangled. The manufactured sense that "it is impossible to know" is the product. If they did use Claude to target bombs, you get the literal deflection of culpability - AI did it, not us. But even if they didn't, the amorphous integration of AI into military systems produces the same result: the AI may not have picked the targets, but it did provide the intel, hired the analysts, and so on.
If they did use Claude to target bombs, you get the literal deflection of culpability - AI did it, not us,
It acted under your orders, so you're on the hook for everything it did. Sucks to be you.