Here, in a nutshell, is why using AI for military purposes is worse than dangerous:

If you ask it for a plan to attack something, it will never respond, "That's a bad idea, don't do it."

@petergleick Keep in mind that attacks cost resources, and resources are crucial in war, so an AI trained for military use will factor that in and will say when an attack is a bad idea. The Collateral Murder scandal, among countless others, showed that humans (US forces, in this case) don't need AI to hunt Reuters journalists for shits and giggles.
@petergleick If you really must agitate against AI, please do it above ChatGPT level. So here is a thought for you, for free: AI will take many things into account for an attack: value of the target, collateral damage, reputation loss... but in defense it will also weigh reputation *gain* when the enemy mounts successful attacks, and the danger such attacks pose to its own troops. It's hard to teach the public how dangerous the enemy is if not a single one of their rockets gets through. I'm more worried that AI will be *way* too efficient.

@NeussWave @petergleick An LLM isn't a real AI. It's not doing the thinking you're doing to decide which things are relevant.

We have not made any big breakthroughs in *real* AI; we have just made stochastic parrots.

@NeussWave @petergleick @gooba42

See, this is what I mean by "mythos"

"Stochiastic parrot" is right up there with "Strawberry" and number of fingers.

Energy Language Model is a thing (kona.1) but you'd never know about it dancing around the fire with the other woodfolk.

@n_dimension @NeussWave @petergleick It's literally just a very large autocomplete generator. It doesn't think. It has no epistemology; it's neither true nor false, because it doesn't have those values in it.
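For anyone following along, "autocomplete generator" here means autoregressive next-token sampling. A minimal sketch of that mechanism, with a made-up toy probability table standing in for a trained model (all names and numbers are illustrative, not from any real LLM):

```python
import random

# Toy "model": conditional next-token probabilities, as if estimated
# from training text. A real LLM conditions on a long context via a
# neural network; the sampling loop is the same shape.
toy_model = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("cat",): {"sat": 0.7, "ran": 0.3},
    ("dog",): {"ran": 1.0},
    ("sat",): {"down": 1.0},
    ("ran",): {"away": 1.0},
}

def generate(prompt, steps=3, rng=random.Random(0)):
    """Extend a token list by repeatedly sampling the next token."""
    tokens = list(prompt)
    for _ in range(steps):
        dist = toy_model.get((tokens[-1],))
        if dist is None:
            break
        # Sample in proportion to conditional probability --
        # no reasoning step, just learned frequency.
        words, probs = zip(*dist.items())
        tokens.append(rng.choices(words, weights=probs)[0])
    return tokens
```

Every continuation the loop produces is statistically plausible given the table, which is the whole point of the "parrot" framing: plausibility, not truth, is the objective.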

"We don't know how it works but it does! Maybe magic?" - The appeal to ignorance doesn't make your argument true.

We do know how it works.

@gooba42 @NeussWave @petergleick

"We do know how it works."
No, no you don't.
What's more, you are too incurious to find out, stuck in your "hahaha strawberry" loop.

I reiterate.
"Energy Language Model is a thing (kona.1) but you'd never know about it dancing around the fire with the other woodfolk."

Stop your dance for a moment and find out.

@n_dimension @NeussWave @petergleick Where's the peer-reviewed, published proof? Both that it's a real thing *and* that it's being used by the US government, which has been actively bragging that it's using LLMs.

You want to make claims, you'll back them with evidence.

@gooba42 @NeussWave @petergleick

The point was that you are incurious and keep repeating the same nonsense about "stochastic parrots" because you do not want to learn the forbidden knowledge of AI tech.

As to your Pentagon procurement question: I am unaware of their supply chain, and frankly I suspect you aren't either, beyond what is in the open press.

Here is the peer-reviewed Energy-Based Models literature.

"You want to make claims, you'll back them with evidence."

Educate yourself, you are welcome.

Peer-Reviewed

Du, Y. & Mordatch, I. (2019). Implicit Generation and Modeling with Energy-Based Models. Advances in Neural Information Processing Systems (NeurIPS).

Carbone, D., Hua, M., Coste, S. & Vanden-Eijnden, E. (2023). Efficient Training of Energy-Based Models Using Jarzynski Equality. Advances in Neural Information Processing Systems (NeurIPS).

Schröder, T., Ou, Z., Li, Y. & Duncan, A. (2024). Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces. Advances in Neural Information Processing Systems (NeurIPS).

Li, Z., Chen, Y. & Sommer, F.T. (2023). Learning Energy-Based Models in High-Dimensional Spaces with Multiscale Denoising-Score Matching. Entropy, 25(10), 1367.

Additional References

Hinton, G.E. & Sejnowski, T.J. (1986). Learning and Relearning in Boltzmann Machines. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press.

LeCun, Y., Chopra, S., Hadsell, R., Ranzato, M.A. & Huang, F.J. (2006). A Tutorial on Energy-Based Learning. In Predicting Structured Data. MIT Press.
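The shared idea in the papers above: define an energy function E(x) and treat low energy as high probability, i.e. sample from p(x) ∝ exp(-E(x)). A minimal sketch of that sampling step using a 1-D double-well energy and Metropolis acceptance (the energy function and all constants are illustrative, not from any cited paper):

```python
import math
import random

def energy(x):
    # Double-well energy: minima (high probability) near x = +1 and x = -1,
    # with a barrier at x = 0.
    return (x * x - 1.0) ** 2

def metropolis_sample(steps=5000, step_size=0.5, rng=random.Random(42)):
    """Draw samples from p(x) ∝ exp(-energy(x)) via random-walk Metropolis."""
    x = 0.0
    samples = []
    for _ in range(steps):
        proposal = x + rng.uniform(-step_size, step_size)
        # Accept with probability min(1, exp(E(x) - E(proposal))):
        # always move downhill in energy, sometimes move uphill.
        if math.log(rng.random() + 1e-12) < energy(x) - energy(proposal):
            x = proposal
        samples.append(x)
    return samples
```

Run it and the samples cluster around ±1, the low-energy regions. Training an EBM (the hard part the papers address) means adjusting the parameters of E so that observed data lands in those low-energy regions.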

@n_dimension @NeussWave @petergleick I do know what LLMs are doing. They *are* stochastic parrots. Your faith in unproven tech is really off-putting because it suggests you have ulterior motives.

@gooba42 @NeussWave @petergleick

You are literally an idiot, Urzl.
Likely an illiterate idiot, because you asked for Energy-Based Models scientific references, and four papers and two textbooks later you still have not bothered to understand what an energy model is, since you are still quoting "stochastic parrot" nonsense.

Proof that you can lead an ass to water and it will still die of thirst.

Go back to the forest folk.
I am relieving you of the duty of pretending you know jack shit about AI.