AIs can’t stop recommending nuclear strikes in war game simulations: leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95% of cases

https://lemmus.org/post/20439606

I seriously don’t understand how anyone would expect any other outcome. It has a goal: to win, or at least not to lose. What is the logical way to get the highest probability of winning? Use the strongest weapon. You wouldn’t expect it to tell you how to build a rain catchment and filter system when you tell it you’re thirsty.

> It has a goal - to win, or not to lose.

Its model doesn’t include the long-term consequences of a nuclear strike, because its core mission isn’t to preserve human life.

Same reason you don’t see AIs constantly interjecting the need to cut carbon emissions, redistribute private wealth, or demilitarize as solutions for resolving conflicts.

This isn’t what the machines were built to do.

They are trained to achieve goals. If your goal is to win a war, but also not to kill anyone, those goals are incompatible.