The use of “hallucinate” is a stroke of true evil genius in the AI world.

In ANY other context we’d just call them errors & the fail rate would be crystal clear.

Instead, “hallucinate” implies genuine sentience & the *absence* of real error.

Aw, this software isn’t shit! Boo’s just dreaming!

@Catvalente in all fairness, I think these really aren't errors, because LLMs have no concept of truth (or of anything, really), so there is no implicit metric by which to rank responses on that axis. Since they just statistically predict what might fit in the next blank spot, "hallucination" actually describes the underlying process better 🤔
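To make that concrete (a toy sketch in Python, not any real model's internals; the table and names are made up): a next-token sampler only ever draws from a probability table, and nothing in the loop represents or checks truth.

```python
import random

# Hypothetical toy "language model": all it knows is which token
# tends to follow a given context -- pure next-blank statistics.
NEXT_TOKEN_PROBS = {
    ("the", "capital", "of", "france", "is"): {
        "paris": 0.85,      # common continuation in the "training data"
        "lyon": 0.10,       # statistically plausible, factually wrong
        "beautiful": 0.05,  # grammatical, just a different sentence
    },
}

def sample_next(context):
    probs = NEXT_TOKEN_PROBS[tuple(context)]
    tokens, weights = zip(*probs.items())
    # Sampling succeeds no matter which token comes out; "true" vs.
    # "false" is not a quantity this process can see, let alone rank by.
    return random.choices(tokens, weights=weights)[0]

print(sample_next(["the", "capital", "of", "france", "is"]))
```

Roughly one run in ten prints "lyon": a perfectly correct draw from the distribution that happens to be false about the world.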
@Catvalente basically, saying an LLM made an error is analogous to staking an important decision on a coin flip, watching the coin land on the "wrong" side, and proclaiming that the coin made an error.
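Same idea as a sketch (a hypothetical example, nothing more): the coin mechanism below works flawlessly on every flip; "wrong" is a label we attach to the outcome afterwards.

```python
import random

def coin_decider():
    # The coin does exactly what it's built to do: return one of two
    # equally likely outcomes. It has no notion of a "correct" side.
    return random.choice(["heads", "tails"])

# "Heads we ship, tails we don't" -- the stakes live in our framing,
# not in the coin. If it lands tails, the coin didn't err; we gambled.
print(coin_decider())
```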