AIs can’t stop recommending nuclear strikes in war game simulations

Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases

New Scientist
@petergleick because of John Nash's game theory applied to the cold war nuclear balance of terror. Nash was suffering from paranoid schizophrenia at the time and later came to realise that real life was much more complicated than his equations.
@petergleick the "rational" option is always to strike first. See also the "rational" economic theories of Hayek, von Mises, Friedman and other charlatans.

@chogbro @petergleick The "AI"used in this experiment are large language models. There is no reasoning, there is no logic, there is no rational actor.

There is only a statistically probable collection of words combined to produce a plausible-sounding response.

@clickhere @petergleick yes, and the LLMs are "trained" on the same bullshit as the Silicon Valley broligarchs

@chogbro @petergleick It's ever-decreasing circles

(and US national security / defence policy now appears to be based on it...)