Generative AI doesn’t have a coherent understanding of the world: Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks. https://news.mit.edu/2024/generative-ai-lacks-coherent-world-understanding-1105
Despite its impressive output, generative AI doesn’t have a coherent understanding of the world

Large language models can achieve incredible performance on some tasks without having internalized a coherent model of the world or the rules that govern it, MIT researchers find. This means these models are likely to fail unexpectedly if they are deployed in situations where the environment or task slightly changes.

MIT News | Massachusetts Institute of Technology

@nixCraft

In other news...

Water is wet.

🙄🙄

@lupus_blackfur To say that something is wet means that the water on its surface can be removed.
Fire burns things, right? But it isn't in and of itself burned.
Source: https://youtu.be/PrrdFvXu1-o (not a Rick roll)
(I’m playing here, not being a jerk. Have a nice day!)
Water's Not Wet | Songify THIS!
