#AI can’t stop recommending nuclear strikes in #wargame simulations
Leading AIs from #OpenAI, #Anthropic and #Google opted to use #nuclearweapons in simulated war games in 95% of cases
The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.
https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/

What could go wrong?

Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon

Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.

CNN