Chatbot was used in war planning

https://awful.systems/post/7449888

This really is the dumbest timeline.

simulating battle scenarios

Regurgitating Reddit armchair generals from /r/NonCredibleDefense

Maybe it has a bunch of leaked classified files from the War Thunder forums as well!

Simulating battle scenarios is absolutely hilarious: adults standing around the Magic 8 Ball

Simulated battle scenarios are a common component of wargaming. That doesn’t mean an LLM is the right tool for the job, but it’s been a thing for a long time.

The bigger concern here is using it for intelligence assessments and target acquisition, because LLMs hallucinate a lot.

as a side effect, it’s a phenomenal accountability sink. people almost forget that usaf can make entirely human-made fuckups en.wikipedia.org/wiki/Amiriyah_shelter_bombing

Yeah, now when your autonomous weapon systems target your own fighter jets, no one gets court martialled!
@wonderingwanderer @fullsquare TBF, fighter jets should have been unmanned drones

TBF, fighter jets should have been unmanned drones

On the one hand, an autonomous fighter jet would be immune to G-LOC, letting it perform maneuvers that would incapacitate or kill a human pilot. On the other hand, air-to-air combat is a complex affair, and the enemy will be probing for any weaknesses in your drones’ programming to exploit.

Autonomous bombers seem easier to pull off - bombing missions are (relatively) straightforward compared to air-to-air combat.


Don’t forget that time we leveled a clearly-marked hospital that we were in radio contact with the entire time.
What about the time the US carpet bombed an entire company of Canadian soldiers in Iraq?
That’s fair, I only meant it to poke fun at the LLM simulating battle scenarios, I know it’s useful in general to simulate and wargame
Yeah, an LLM is not designed for those kinds of simulations. It can write you a choose-your-own adventure story, but it can’t realistically model dynamic kinetic operations with any degree of applicability.
Wonder how hard it was to filter out the HOI4 results.
Bold of you to assume they would bother filtering them out.

If they were talking about some complex simulation engine utilizing ML and research carefully collated and constructed, this would be at least interesting.

But they aren’t. We shoved the entirety of text produced by the human race into a big pot, mixed it up, and extruded it along the most likely word-to-word connections. It’s predictive of words and phrases, not of human behavior, the physics of munitions, or anything actually useful for modeling warfare.
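The “extrude it along the most likely connections” bit really is the whole trick. Here’s a toy sketch of the idea, using hypothetical bigram counts over a made-up corpus as a stand-in for a trillion-parameter transformer; the corpus and function names are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical training text -- the model sees only these words.
corpus = "the enemy advanced and the enemy retreated and the army held".split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

def extrude(start, n):
    """Chain the most likely next word n times -- no world model, just stats."""
    out = [start]
    for _ in range(n):
        out.append(most_likely_next(out[-1]))
    return " ".join(out)

print(most_likely_next("the"))   # "enemy" (follows "the" twice, "army" once)
print(extrude("and", 2))         # "and the enemy"
```

The generator has no concept of an enemy, an army, or a battle; it only knows which words tended to sit next to each other in its training text. Scale that up and you get fluent war fanfic, not a simulation.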

They’re fucking generating war fanfic and using it to make strategic decisions. Just hire Clancy, Crichton, Card, and/or whoever’s ghostwriting for them now. It’d be cheaper.

Yeah, the fact that the nation’s highest military command no longer understands the difference between machine learning and an LLM is gravely concerning…

They fired all the professionals in 2025. All that’s left are the sycophants.