AI hallucinations are inevitable.
Duh.
Next week’s revelation?
Not just “Industry evaluation methods made the problem worse”, but industry _valuation_ methods made the problem worse.
Next up, studies in self-deception, mass deception, advertising, hype, propaganda, and believing your own ill-considered bullshit.

> OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws.
> In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.