People refuse to acknowledge how an LLM actually works and insist on assigning meaning and understanding to its output.

There is no meaning, and the system understands neither the user’s question nor its own “response.” At least, not in the sense the conversational format of the interaction leads people to expect.

https://www.whodoyouthinkyouaremagazine.com/news/gemini-artificial-intelligence-the-national-archives-fake-records

“My ‘methodology’ was a series of errors”: Gemini generates false records and fake screenshots of TNA website | Who Do You Think You Are Magazine

The Gemini LLM generates fake records and screenshots from the UK National Archives, a family historian has revealed

@mcnees @physics It’s irrelevant if an LLM is sentient, or “understands” anything. It’s only important that it sufficiently simulates it.
@markloundy That's the issue, right? (Or one of them.) Sufficient emulation is one reason people fail to properly interrogate the outputs!