People refuse to acknowledge how an LLM actually works, and insist on assigning meaning and understanding to its output.

There is no meaning and the system understands neither the user’s question nor its own “response.” At least, not in the sense people expect from the conversational format of the interaction.

https://www.whodoyouthinkyouaremagazine.com/news/gemini-artificial-intelligence-the-national-archives-fake-records

“My ‘methodology’ was a series of errors”: Gemini generates false records and fake screenshots of TNA website | Who Do You Think You Are Magazine

The Gemini LLM generates fake records and screenshots from the UK National Archives, a family historian has revealed


@mcnees this here is maybe the biggest issue with #GenAI :

“I don’t pay to subscribe. Often it’s just to get quick answers such as links to archived journals or asking for historical context.”

IT CANNOT PROVIDE CONTEXT. Humans MUST provide the context, otherwise it is almost guaranteed that the output will be pure BS.

That said, you can't fault this person for "using it wrong" because the services themselves actually encourage people to use #LLM technology wrong.

@msh @mcnees

THAT'S the problem indeed. Google and ChatGPT *advertise* their LLMs as super advanced search engines. We can't blame people for believing these things are super advanced search engines.

We can educate and tell people around us that these things aren't reliable, but we can't reach everyone.