People refuse to acknowledge how an LLM actually works, and insist on assigning meaning and understanding to its output.

There is no meaning and the system understands neither the user’s question nor its own “response.” At least, not in the sense people expect from the conversational format of the interaction.
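
To be concrete about the mechanics (a deliberately toy sketch in Python, with made-up probabilities and a two-token context; no real model is remotely this small): at inference time an LLM just keeps sampling the next token from a learned distribution over what usually comes next, and nothing in that loop checks whether the continuation is true.

import random

# Toy next-token tables. A real model learns these distributions (billions
# of parameters) from training text; the numbers below are invented purely
# for illustration.
next_token_probs = {
    ("The", "National"): {"Archives": 0.85, "Trust": 0.10, "Gallery": 0.05},
    ("National", "Archives"): {"holds": 0.6, "catalogue": 0.3, "confirms": 0.1},
}

def sample_next(context):
    # Look up the distribution for the last two tokens and sample from it.
    dist = next_token_probs.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

tokens = ["The", "National"]
while tokens[-1] != "<end>" and len(tokens) < 8:
    tokens.append(sample_next(tokens))

print(" ".join(tokens))
# Prints something like "The National Archives holds <end>": fluent because
# "holds" was probable given the context, not because any record was checked.

Real systems are vastly larger and condition on the whole conversation, but the loop has the same shape: likelihood in, text out, truth nowhere in the procedure.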

https://www.whodoyouthinkyouaremagazine.com/news/gemini-artificial-intelligence-the-national-archives-fake-records

“My ‘methodology’ was a series of errors”: Gemini generates false records and fake screenshots of TNA website | Who Do You Think You Are Magazine

The Gemini LLM generates fake records and screenshots from the UK National Archives, a family historian has revealed

To make matters worse, the same is true of the breakdown generated by the LLM when asked to explain what it did. It may be entirely true. It may be hallucinated in part or in whole. It’s tempting to take it at face value because it sounds plausible. You cannot know for sure; it’s not a real answer.
@mcnees pretty sure you can get an llm to apologise for a right answer as easily as a wrong one
@ASprinkleofSage You can, and I make my students do this to illustrate the difference between getting an answer and being served something shaped like an answer.
@mcnees Exactly. But these models have been set up to use features of normal human interactions, such as using first-person pronouns in their output, using language that refers to their “feelings” (such as “sorry”, “excited”, or “happy to”), and other tricks to create the illusion that the LLM is having a conversation like a human.
@michaelgemar @mcnees designed to manipulate humans, agreed
@mcnees agreed. I used Gemini heavily in 2025. I put it through the wringer, like I was interrogating how it "thought" and what it knew and why. I eventually concluded that there was no "there" there and that, yes, it was a stochastic parrot or casino machine.
@mcnees @physics It’s irrelevant if an LLM is sentient, or “understands” anything. It’s only important that it sufficiently simulates it.
@markloundy That's the issue, right? (Or one of them.) Sufficient emulation is one reason people fail to properly interrogate the outputs!

@mcnees this here is maybe the biggest issue with #GenAI :

“I don’t pay to subscribe. Often it’s just to get quick answers such as links to archived journals or asking for historical context.”

IT CANNOT PROVIDE CONTEXT. Humans MUST provide the context; otherwise it is almost guaranteed that the output will be pure BS.

That said, you can't fault this person for "using it wrong" because the services themselves actually encourage people to use #LLM technology wrong.

@msh @mcnees

THAT'S the problem indeed. Google and OpenAI *advertise* their LLMs as super-advanced search engines. We can't blame people for believing these things are super-advanced search engines.

We can educate and tell people around us that these things aren't reliable, but we can't reach everyone.

@mcnees

And this is not fixable. It is baked into how LLMs work.

Hallucinations are inevitable and unavoidable, and no amount of resources or layers of post-processing is going to reduce them to an acceptable level.

@EmilyGB2023 @mcnees

And furthermore: hallucinations ARE the goal of LLM training.
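
To put that in objective terms (assuming the standard next-token cross-entropy loss these models are generally described as being trained with), the training loss is roughly

L(θ) = − Σ_t log p_θ(x_t | x_{<t})

Every term rewards assigning high probability to continuations that look like the training data; nothing in it measures whether the generated text is true, so fluent fabrication is that objective being met, not a malfunction of it.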

@mcnees I'm trying to compose a list of the people I interact with professionally whom I need to ask "Do you use LLMs in your daily work?" so I know to double-check their work. I can't believe people use this stuff for actual jobs. It's like using a Magic 8 Ball for cancer diagnosis.