People refuse to acknowledge how an LLM actually works, and insist on assigning meaning and understanding to its output.

There is no meaning and the system understands neither the user’s question nor its own “response.” At least, not in the sense people expect from the conversational format of the interaction.

https://www.whodoyouthinkyouaremagazine.com/news/gemini-artificial-intelligence-the-national-archives-fake-records

“My ‘methodology’ was a series of errors”: Gemini generates false records and fake screenshots of TNA website | Who Do You Think You Are Magazine

The Gemini LLM generates fake records and screenshots from the UK National Archives, a family historian has revealed


@mcnees

And this is not fixable. It is baked into how LLMs work.

Hallucinations are inevitable and unavoidable, and no amount of resources or layers of post-processing will reduce them to an acceptable level.

@EmilyGB2023 @mcnees

And furthermore: hallucinations ARE the goal of LLM training.
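A toy sketch of that point, for anyone who wants it spelled out (the token names and probabilities below are invented for illustration, not taken from any real model): the training objective only scores how probable the next token is given the context, so a fluent but fabricated archive reference is rewarded exactly like a genuine one, and generation simply samples from that learned distribution.

```python
# Minimal, hypothetical sketch of next-token training and sampling.
# The probabilities are made up; no real model or dataset is being quoted.
import math
import random

# Imagined next-token distribution for the context
# "The National Archives record reference is"
next_token_probs = {
    "WO": 0.40,      # plausible record-series prefix
    "ADM": 0.30,     # plausible record-series prefix
    "HO": 0.20,      # plausible record-series prefix
    "[none]": 0.10,  # "no such record exists" is just another low-probability token
}

def cross_entropy(target: str) -> float:
    """Training loss for one step: -log p(target | context).
    Nothing here checks whether the target token is factually correct."""
    return -math.log(next_token_probs[target])

def sample() -> str:
    """Generation: draw a token in proportion to its learned probability."""
    r = random.random()
    cumulative = 0.0
    for token, p in next_token_probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token

# The model is pushed to assign high probability to whatever followed in its
# training text; at generation time it emits a fluent continuation, true or not.
print("loss if the training text said 'WO':", round(cross_entropy("WO"), 3))
print("sampled continuation:", sample())
```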