So again... we're OK with autonomous AI in these scenarios because 'that's what the enemy will do', or the rule of law, or something, even though one of the big AI innovators (maybe *the* big one when it comes to actual technical chops) says it's definitely not ready for that, and he's now unintentionally in a pissing contest with an ex-cable-news host?

I mean, you know it's not in his business interest to publicly say it's not ready and get into this standoff, and yet he's worried enough to be doing it anyway.

Just checking...

https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/

AIs can’t stop recommending nuclear strikes in war game simulations

Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases

New Scientist