LLMs don't hallucinate or lie; they ‘bullshit’, in the sense that the late philosopher Harry Frankfurt (https://en.m.wikipedia.org/wiki/Harry_Frankfurt) defined it, argue Glasgow researchers in their recent paper: https://link.springer.com/content/pdf/10.1007/s10676-024-09775-5.pdf

It's crucial to replace words like ‘hallucinate’ or ‘lie’ with a word like ‘bullshit’. This is not an attempt to be witty: the wording, they argue, shapes how investors, policymakers and the general public think of these tools, which in turn affects the decisions they make about them.


“The problem here isn't that large language models hallucinate, lie, or misrepresent the world in some way. It's that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text.”

The paper draws an interesting distinction, building on Frankfurt, between ‘soft bullshit’ and ‘hard bullshit’, arguing that ChatGPT's output is definitely the former and, in some specific cases, the latter.

@hdv
The words ‘hallucinate’ and ‘lie’ humanize LLMs, which is exactly how the companies behind these systems like to frame them. They are not human. So ‘bullshit’ is a very good word for it, imho.
@LinHead yup that's the point the paper makes
@hdv I read it and I think we should publicize this widely. Almost none of the press gets it right at the moment. LLMs won't solve any problems; other kinds of so-called AI might help with some.
The owners of the LLMs create problems.
Right on point:
https://youtu.be/TtVJ4JDM7eM
Honest Government Ad | AI

@hdv @LinHead Frankfurt probably had only humans in mind when writing about bullshitting. The authors repeatedly make the point that LLMs do not care about anything, yet they argue that LLMs bullshit nonetheless.
They try to bridge that gap: ChatGPT produces texts independently, convincing readers of the truth of its statements; therefore it bullshits. Interesting! Though I am not 100% convinced by that argument.
@erikp @hdv
LLMs are far from independent in the usual sense of the word. They produce text on their own, but only from training data and by following at least some algorithm. And the producers try to make them seem human by training them to say things like ‘sorry’ or ‘have a nice day’.
You might have a point about Frankfurt meaning humans - I still have to read the full text.