RE: https://infosec.exchange/@hacks4pancakes/116192434654015384

I have been watching this story simmer for several days. I've been wary of it. It fits too neatly into the criticisms and warnings many of us have been raising. But it's starting to look like, yes, they are using an LLM to make critical decisions.

At the same time I have heard parts of speeches from the US Secretary of War.* I have been dismayed by his shallow thinking. It doesn't help that his speeches sound like they were also composed by an LLM.

*formerly Defense

His speeches sound LLM-generated because they are filled with so many cliches and empty sentences. And because they sound nothing like the way he speaks extemporaneously. Of course this could just be a bad speechwriter. Or it could be that when your ideas are bad, no words can fix them.

The Department of War will not even say whether they missed. Did they *want* to hit that school? If not the school, then what?

@futurebird I think they took a page from Israel’s book and let AI choose the targets with no human oversight. Which in itself says a lot about their moral reasoning.
@complexmath @futurebird As someone pointed out, in this case, it's not about the technology per se, it's that they don't care to even think about who they're murdering.