Please, please stop supporting gen AI companies. It’s not worth it. https://futurism.com/artificial-intelligence/pentagon-ai-claude-bombing-elementary-school
Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target

Did the US military use Anthropic's Claude to select targets in its weekend operations in Iran, with devastating results?

Futurism
Phew, that spun people up on Bluesky

@hacks4pancakes
I wonder if that decision to use AI in targeting was based on "Yes, use AI, it's perfect and can't make mistakes!!" or rather "We know that AI makes wrong decisions, and combining this with live ammunition costs innocent lives. But it's not our civilians who get blown up, so we don't care. Acceptable is good enough!"-irresponsibility.

Or in simple terms: Are they ignorantly stupid or just cruel?

@momo @hacks4pancakes

That one's really hard because they have shown themselves to be both so many times

@momo @hacks4pancakes The leadership is swayed by the AI $$$. And they're too stupid to consider anything else. At work we've been testing a Claude-based system to remediate missing alt-text in ebooks, and the results would not make me confident you could use it for targeting. In fact, quite the opposite.

@momo @hacks4pancakes

Third option:
They're cruel and murderous on purpose (Israel is committing genocide in Gaza and the US supports it, after all) and initially planned to use AI as plausible deniability for PR reasons, but didn't, because it doesn't make it look any better. Worse, in fact.

@momo @hacks4pancakes The latter.
They'll be using this to test all kinds of diabolical fuckery.

@hacks4pancakes

Congrats for the second post in that screen shot, I think the point you made there is not stated often enough.

I really hate how LLMs took over the general term "AI", which has been used in CS research for dozens of different approaches.

@mndflayr I drink and I know something

@hacks4pancakes
It's the opposite for me. I drink and I stop knowing things. 😋

@mndflayr

@hacks4pancakes I don’t, and that probably has brought me in the permanent state that Terry Pratchett once called “knurd”. -5/10 can’t recommend
@hacks4pancakes non-sequitur: that glass is sweating buckets. You must still be in Singapore.

@hacks4pancakes

To be fair, it's really easy to spin people up on Bluesky.

I don't know of any truly left-leaning people who support AI.

@hacks4pancakes

We just need to give better training to the LLMs and make them wear body cams

AI safety - Wikipedia

@CosmickTrigger Of course domestic LLMs (OpenAI/Anthropic/Gemini/etc.) have qualified immunity. We _strongly_ recommend against using Deepseek...
@hacks4pancakes
@hacks4pancakes and the smug "military targeters would probably have made the same mistake" makes me think of Chief Wright (Stonekettle Station), who did targeting during the first Gulf War. He talked about sleepless nights double-checking intelligence to make sure they didn't kill civilians, and about calling off attacks when they couldn't be sure. He would have been personally devastated to learn a missile he launched had killed schoolkids. The military structure at the time allowed him to care about such things, but Secretary Hegseth doesn't seem to care, or to have a soul, as far as I can tell.
@hacks4pancakes
I wish that were an energy source…
@hacks4pancakes on the other side of things, does it give them less liability if Claude picks the targets? Or more? It possibly shifts liability away from the administration to the AI devs. I mean, it shouldn't, but I can see them arguing that. Administrations love scapegoats.
@hacks4pancakes and they wonder how people get radicalised
@hacks4pancakes I've been angry about this every single day this week. Multiple times, even. "This is so good at coding!!! we're gonna get left behind!!" no it isn't, no you aren't, they are using it to fucking kill people, which should never have been a thing you should have had to consider because frankly you already should have rejected the usage of LLMs due to the eight million other ethical and moral problems with them.
@hacks4pancakes
And stop blaming AI. Blame the people who decided to use AI and blame those who blindly accepted its output without checking it.
@jwi @hacks4pancakes Both are bad. One for giving bad information, the other for using it without checking it.
@jwi @hacks4pancakes the people selling the software that was involved in that decision constantly claim it is a sentient being capable of making better-than-human decisions so yes, we will absolutely "blame AI" as part of the larger effort to hold humans (both the ones dropping the bombs and the ones providing them with the support to do so) accountable.
@jplebreton @hacks4pancakes
Agreed. I see it as the same as a tradesman blaming their tools. The human is still responsible.

@jwi @jplebreton @hacks4pancakes the argument is the speed at which evil and incompetence can be inflicted.

With no tool, it takes longer, and there are more opportunities to intervene. With a tool people appear to worship like the Ark of the Covenant, there are opportunities for near real-time horrors untold.

@hacks4pancakes
Just ask yourself:
If they didn't, would they still deny it?
@hacks4pancakes
Safe bet there's a contractual clause saying the Pentagon is not allowed to explain which manufacturer's "product" they used.
@hacks4pancakes Could not agree more. And so long as these companies are solely blamed, and not the Commander-in-Chief of the United States military, that's another victory for DJ "bone spurs" T.
@hacks4pancakes
As a 97B, I was taught to always recheck any form of electronic or image intelligence. During WWII, we "read" a flyover photo of people lined up to enter a Nazi gas chamber as people lined up to enter a mess hall.
@hacks4pancakes I assume you have read this or at least heard rumours of similar? https://www.972mag.com/lavender-ai-israeli-army-gaza/
‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties, +972 and Local Call reveal.

+972 Magazine
Claude AI has selected over 1,000 targets in the US-Israeli war against Iran

Anthropic's CEO has said nothing as the military uses his company's technology to wage an illegal war, while OpenAI grants the Pentagon functionally unrestricted access to ChatGPT.

World Socialist Web Site
@hacks4pancakes An AI system would select sending a second bomb 40 minutes after the first, to maximise deaths among rescuers.
That is what happened: a second bomb, 40 minutes later.

@hacks4pancakes

"Ai" is not something that popped up in the last 3 years.
Ai was being integrated into combat systems for the last decade+.

I know because I was active in #StopKillerRobots movement and no one was interested.