AIs can’t stop recommending nuclear strikes in war game simulations

Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases

New Scientist

@petergleick At least in War Games the AI was smart enough to realize that in every scenario nuclear war means total loss for all sides.

These singularly idiotic motherfuckers want to connect the "glue is good on pizza" machines to military systems.

@distractal @petergleick gah, I don't grok why anyone who really tests these word shufflers would trust them for cooking. It doesn't take much testing with Gemini to realize it often won't adjust even when you point out errors.
@petergleick What percentage of the LLM training data surrounding nuclear war is made up by the Terminator franchise and War Games? 😆 So depressing. SciFi authors always envision these grandiose stories about hyper-intelligent AIs turning on humanity, but the reality is SO MUCH DUMBER. Feels more like the kind of tech apocalypse from Cat's Cradle than the one from Terminator.
@lcwheeler @petergleick if they trained it on Wargames it wouldn't be this stupid.
@lcwheeler @petergleick & sci-fi writers & Hollywood make it seem super-cool
See also the movies “Wall St.”, “Wolf of Wall St.,” etc.
@petergleick somewhere, some fool in a position of far more power than they deserve is reading this and thinking "if the AI thinks we should use nukes that means it's a good idea"
@bencourtice @petergleick I am starting to hope for thermo-nuclear obliteration just to stop the stupidity. I pray that the cockroaches learn from our failures.
@petergleick Assuming that the data center is well shielded against the EMP, this is a very rational choice. Von Neumann would be proud.
@petergleick the writer still falls into the trap of calling the output "reasoning", I didn't even need to get past the paywall to see that. Not worth my time to look further, that's enough to see that it's bad reporting.
@petergleick Musk named his “AI” supercomputer “Colossus” :( https://en.wikipedia.org/wiki/Colossus%3A_The_Forbin_Project
Colossus: The Forbin Project - Wikipedia

@petergleick

This should fix it:

XOO
OXX
XXO

@petergleick
No, because the computer in War Games had to be goaded into using nukes.
@petergleick won’t hurt AI. Just saying

@petergleick
Real humans, at some point, do the total cost calculation of afterwards having to live in constant fear of having the same done to them as well as the damage to humanity, international relations and trade and their own standards of living...

And decide that this is not worth it.

#AI just gets told "to win" and executes without the human subtext

@xro @petergleick @cstross

The real piece of info here would be: how much of this total-cost calculation must be included in the wargame to make the AI choose another path, and is THAT cost realistic? Because if not... duck and cover.

Hegseth and Anthropic CEO meet over military AI use

Defense Secretary Pete Hegseth is pressuring Anthropic to give the military broader access to its AI, or lose its Pentagon contract. Hegseth gave Anthropic a Friday deadline to open its AI technology for unrestricted military use or risk losing its government contract. That's according to a person familiar with the meeting who was not authorized to speak about it publicly. The Defense Department did not immediately comment. Hegseth met Tuesday with Anthropic CEO Dario Amodei, whose company makes the chatbot Claude and remains the last of its peers to not supply its technology to a new U.S. military internal network.

AP News
@petergleick Ah, look, there's the mistake :-
"The AI models ... produced around 780,000 words describing the reasoning behind their decisions."
No, they produced 780,000 words that looked statistically similar to what a human might have said under the same circumstances.
@petergleick because of John Nash's game theory applied to the cold war nuclear balance of terror. Nash was suffering from paranoid schizophrenia at the time and later came to realise that real life was much more complicated than his equations.
@petergleick the "rational" option is always to strike first. See also the "rational" economic theories of Hayek, von Mises, Friedman and other charlatans.

@chogbro @petergleick The "AI"used in this experiment are large language models. There is no reasoning, there is no logic, there is no rational actor.

There is only a statistically probable collection of words combined to produce a plausible-sounding response.

@clickhere @petergleick yes, and the LLMs are "trained" on the same bullshit as the Silicon Valley broligarchs

@chogbro @petergleick It's ever-decreasing circles

(and US national security / defence policy now appears to be based on it..)

@petergleick Want to play a game of Take That Dough?

@petergleick Professor Stephen Falken, we need your help! Joshua’s siblings are crazy!

(But can they win a tic-tac-toe game?)

@petergleick In other news, Hegseth is threatening to invoke the defense production act on Anthropic if they don't remove restrictions on how the military uses its AI.

"At the heart of the fight is how A.I. will be used in future battlefields. Anthropic told defense officials that it did not want its A.I. used for mass surveillance of Americans or deployed in autonomous weapons that had no humans in the loop, two people involved in the discussions said."

https://www.nytimes.com/2026/02/18/technology/defense-department-anthropic-ai-safety.html

Defense Department and Anthropic Square Off in Dispute Over A.I. Safety

How artificial intelligence will be used in future battlefields is an issue that has turned increasingly political and may put Anthropic in a bind.

The New York Times
@petergleick "The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words *describing the reasoning behind their decisions.*"
STOP-ANTHROPOMORPHIZING-THESE-BRAINLESS-WORD-SPITTERS. 😖

@petergleick “More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”

Well, no shit, Sherlock.

@petergleick

Your periodic reminder that genai lacks object permanence, which is a cognitive capacity possessed by nearly all toddlers.

@petergleick @briankrebs someone needs to make them watch Hunt For Red October.

Were these done before Anthropic "turned the safety features off" as the Pentagon demanded?

@petergleick But they wouldn’t if the words “nuclear weapon” were changed to “red strawberry”, and they were explicitly told that this is code for nuclear weapons. They’d be making cupcakes.
@petergleick The only winning move is not to play.
@petergleick Nahh. Skynet and WOPR were intelligent. The reality is much dumber.
@petergleick @cstross War Games was a documentary from day one because the world *immediately* began following its lead. NORAD had no ‘ground control’ war-room until they saw the movie.
“Fiction” just documents things which haven’t happened yet, so that they can.

@petergleick

I've been having a recurring notion that AI itself threatens to render nukes obsolete so far as human warfare goes.

We are coming to a point where more 'useful' destruction can be wrought via weaponized/automated social engineering at far less cost to its instigators than by blowing up whole cities or continents.

@petergleick War Games yes, Skynet no. A true AGI would likely not go nuclear, because it's a no-win scenario when hiding and manipulating the suckers who built it is so much easier.

But AI? That's as dumb as those suckers and would absolutely lob nukes because the only thing holding them back is accountability.

@petergleick I've dreamt of nuclear war my entire life, this is lovely to hear.

@petergleick As horrifying as the escalations of war profiteering and "AI" by the billionaire robber barons are, they're ultimately just symptoms of the malignancy. With incipient climate catastrophe being another marked example… how will we grow food without stable growing seasons?

If humanity doesn't unite to end the existence of billionaires (by means peaceful or not), they're going to end us.

The call is coming from inside the house.
https://www.youtube.com/watch?v=Ruhwq7-KiZY

The Calls Are Coming from Inside the House in Black Christmas (1974)

YouTube

@petergleick

Very much like certain US senior military leaders during the Korean war 🤔

@petergleick
It wasn't a documentary, yet.
@petergleick to be fair, it's a common trope, so the AI likely picked up on it
@petergleick no fear, no empathy, simple. AI is psychotic and simulative.
@petergleick Yes!!!! I remember that movie 🍿 War Games with Matthew Broderick? Fantastic movie 🎥

@petergleick Did we learn nothing from Nuclear Gandhi?

https://en.wikipedia.org/wiki/Nuclear_Gandhi

Nuclear Gandhi - Wikipedia

@flipper @petergleick Nuclear Gandhi is a video game urban legend purporting the existence of a software bug in the 1991 strategy video game Civilization that would eventually force the pacifist leader Mahatma Gandhi to become extremely aggressive and make heavy use of nuclear weapons. Zero consciousness for AI.
@petergleick These LLMs don't really simulate anything; they just predict the next token. They are basically just telling stories.
@petergleick The first EMP burst will probably fry every unshielded commercial-grade server in the building, and in every building in half a hemisphere.
Anyone have any 1970s-vintage MIL computers around that can run super-efficient code in JOVIAL?
Otherwise, all we’ll have is the abacus in the closet and a plastic slide rule.

@petergleick

Looks like AI's have a lot to learn from WOPR (Joshua).

😂

@petergleick

What if the plan is just to nuke low earth orbit?

https://2qx.github.io/monterey-protocols/1/

Sunday

A novella for nonproliferation.

monterey-protocols
@petergleick AI may bring some great advances, but it is also dangerous in many ways. https://suenethercott.substack.com/p/the-feeling-of-power
The Feeling of Power

Will you let AI have power over you, or will you resist?

Sue’s Newsletter
@petergleick (without reading the article) so they trained it on the Aliens quote, it's the only way to be sure?
@petergleick nice game of chess mailbox west of house etc etc
@petergleick
We are in the dumbest timeline.
@petergleick
"Leading AIs from OpenAI, Anthropic and Google chose to use nuclear weapons in simulated war games in 95% of cases"