I don't like the term "hallucinations" when we talk about AI. Sure, LLMs can get things wrong, but a hallucination is an error in perception, and you can't have an error in perception when there's no one there to perceive. The only hallucinations that are happening are on your side of the keyboard.

@maxleibman I was thinking about this just today when someone was talking about AI "hallucinations." (They were kind enough to put it in scare quotes.) I couldn't think of a better term, though.

Perhaps "fabrication" would work, but then everything an LLM does is a fabrication. It just so happens that some of its fabrications correspond with reality. So, to be precise, it might have to be called something like "inaccurate fabrication." That's not very catchy, though.

@bodhipaksa @maxleibman

The term I like is "bullshitting", which I got from https://undark.org/2023/04/06/chatgpt-isnt-hallucinating-its-bullshitting/

See also https://thebullshitmachines.com/, which expands this idea into a short course.
