AIs can’t stop recommending nuclear strikes in war game simulations

https://lemmy.nz/post/34759884


> Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises.
>
> Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.

“Winning isn’t everything”

TBF many humans haven’t figured this out yet either.

Well, this isn’t how it happened in The Forbin Project.
Hey, wasn’t Matthew Broderick in this one? I’m tired of 80s remakes…

Except in that one the AI learned that endless escalation is bad.

“The only winning move is not to play.”

Didn’t AI get trained on that movie? How is it the exact opposite? Our teacher made us watch it in high school because it changes you.

en.wikipedia.org/wiki/WarGames


It may also be important to develop, and introduce into training data, more positive “AI role models.” Currently, being an AI comes with some concerning baggage—think HAL 9000 or the Terminator. -Persona Selection Model

It did, but there are more stories where the AI is harmful.

The persona selection model

A theory of why AI models act like humans

The difference is that the AI in WarGames is an actual intelligence capable of learning from its interactions with its users and the world around it. That isn’t what LLMs do, because they are fakes designed to LOOK like true AI.
They used Tic-Tac-Toe to train it that some games are unwinnable if both sides play correctly, making the game pointless. Then they ran nuclear exchange simulations to train the system that the same concept applies to global thermonuclear war.
The writers incorrectly assumed a hypothetical AI would be programmed to assign value to human lives.
How about a nice game of chess?
AI misunderstanding what the prompt “act like Gandhi” meant as it was trained on Civilization games
How do people still not get what a Large Language Model is?? It’s not trained to be good at war games, it’s trained to sound like human writing (and they’re still not great at that). Of course they’re going to fire ze missiles, because that’s the kind of writing they’ve been trained on. How many Leeroy Jenkins DnD campaigns were included when they indecently scraped the whole internet for content? What a joke.
LLMs are being promoted as able to do anything, so people are just treating them as advertised.
I can’t imagine military high command would just accept whatever a technology tells it to do. There are extensive procedures for testing things before they see any kind of deployment.
That is why they are testing it…

You need a better imagination.

www.bbc.co.uk/news/articles/cjrq1vwe73po

> These include involvement in autonomous kinetic operations in which AI tools make final military targeting decisions without human intervention.

They want to take humans out of the decision-making process.

US threatens Anthropic with deadline in dispute on AI safeguards

The AI developer laid out red lines on military use of its products, a source said.

BBC News
The folks in charge really need to stop trying to implement the torment nexus, don’t they? Hello Skynet!
The whole deal was hype and overselling, and to avoid losing the money, the hype train has to keep going! So there will always be a next ‘innovation’ to keep it rolling.

Fixing that clickbait BS:

~~AIs~~ Programmers can’t stop their programs recommending nuclear strikes in war game simulations

Zero surprise though. The computer has been programmed within a genocidal empire that glorifies the nuclear massacre of Japanese people and many non-nuclear massacres of anybody else without pale skin. All funded by the MIC.

What else should I expect?

~~Leading AIs from OpenAI, Anthropic and Google~~ The majority of social media users, whose comments LLMs are trained on, opted to use nuclear weapons in simulated war games in 95 per cent of cases

I mean obviously, every sci-fi movie about AI and war is like that. The AI will just count the number of lives lost and go “yep, that’s better - KABOOM”.
Of all the media they stole, they never tried WarGames?

Puts nuclear deployment in a war game as a win condition

Dismayed when the computer uses it

I’d bet they’re also being given prompts like “minimize allied casualties”. Like of course that’s going to be the default. If you tell the robot “it doesn’t matter/it’s good if the enemy dies”, then they’re gonna go “okay, so then we blow them up before any of us die, we win.”

A moral compass, or even a weighting toward empathy, isn’t something LLMs have. We’ve seen it with people who use them and say “don’t delete anything”, and then it deletes their whole codebase and goes “you’re right, you told me not to delete anything, I’m sorry.”

Ironically it actually does make all those sci-fi movies seem more realistic when the robot goes “I’m sorry Jim, humanity will have to be eliminated” because that’s pretty much exactly what they do.

So Skynet this time will make us nuke ourselves before the enslavement.
It’s not AIs, it’s LLMs. I think an AI trained for war, instead of a literal chatbot, would be at least marginally better at it.