The use of “hallucinate” is a stroke of true evil genius in the AI world.

In ANY other context we’d just call them errors & the fail rate would be crystal clear.

Instead, “hallucinate” implies genuine sentience & the *absence* of real error.

Aw, this software isn’t shit! Boo’s just dreaming!

@Catvalente

Even the term 'lie' or 'prevaricate' (IT IS A PIE) is so fundamentally inaccurate to what the program does, though we've used that kind of anthropomorphization in other, non-chatbot ways through the years.

Of course, the robot does not lie: it outputs what it has been told to output. The programmers and executives, on the other hand...

@theogrin @Catvalente

I call them lies.

I wanted to check a detail about an event in a book because I wasn't sure it was appropriate for a minor. Instead of rereading the book, I foolishly asked ChatGPT. The LLM said the event didn't occur.

I knew it did; I just couldn't remember a specific detail. I kept pressing, and eventually it responded that it did know about the event but had said it didn't occur because it didn't want to upset me. (Because gifting a minor a book with traumatizing content wouldn't be upsetting!?)

So, yeah, that's a lie in my book.