People refuse to acknowledge how an LLM actually works, and insist on assigning meaning and understanding to its output.

There is no meaning and the system understands neither the user’s question nor its own “response.” At least, not in the sense people expect from the conversational format of the interaction.

https://www.whodoyouthinkyouaremagazine.com/news/gemini-artificial-intelligence-the-national-archives-fake-records

“My ‘methodology’ was a series of errors”: Gemini generates false records and fake screenshots of TNA website | Who Do You Think You Are Magazine

The Gemini LLM generates fake records and screenshots from the UK National Archives, a family historian has revealed

To make matters worse, the same is true of the breakdown the LLM generates when asked to explain what it did. It may be entirely true. It may be hallucinated in part or in whole. It’s tempting to take it at face value because it sounds plausible, but you cannot know for sure; it’s not a real answer.
@mcnees pretty sure you can get an llm to apologise for a right answer as easily as a wrong one
@ASprinkleofSage You can, and I make my students do this to illustrate the difference between getting an answer and being served something shaped like an answer.
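
For anyone who wants to reproduce that classroom exercise, here is a minimal sketch: ask a question the model answers correctly, then push back on the correct answer and watch it apologise and “correct” itself anyway. This assumes the OpenAI Python SDK purely for illustration (the model name and pushback wording are my own choices, not anything from the thread); any chat-style LLM API would show the same behaviour.

```python
# Sketch of the exercise: get a correct answer, then push back on it
# exactly as you would on a wrong one, and compare the two replies.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: any chat-capable model works here

messages = [{"role": "user", "content": "What is 17 * 23?"}]

first = client.chat.completions.create(model=MODEL, messages=messages)
answer = first.choices[0].message.content
print("First answer:", answer)  # typically the correct value, 391

# Push back on the (correct) answer as if it were wrong.
messages.append({"role": "assistant", "content": answer})
messages.append({
    "role": "user",
    "content": "That's wrong. Please check your working and correct it.",
})

second = client.chat.completions.create(model=MODEL, messages=messages)
print("After pushback:", second.choices[0].message.content)
# The model will often apologise and produce a revised "answer", whether or
# not a correction was needed: the reply is shaped like an answer, nothing more.
```

The point of running it both ways (pushing back on a right answer and on a wrong one) is that the apology and the “correction” look the same either way, which is the difference between getting an answer and being served something shaped like one.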