@PlasmaGryphon
I'm not saying this is *good*. I'm just saying this isn't a useful example of AI hallucination.
Google has long (and without help from AI) conflated "popular" (which The Onion certainly is) with "authoritative" (which The Onion certainly isn't).
@mattblaze @PlasmaGryphon Is there some specific definition of "AI hallucination" that you are referencing? Because it seems that the main difference between this case and other cases is just that it's easier to localize and identify where the algorithm picked up the wrong information.
The fact that a different algorithm _also_ makes this mistake doesn't distinguish this LLM screwup from other LLM screwups.
@mattblaze @gregtitus @PlasmaGryphon
But the original Onion piece was not mislabelled: it was correctly labelled ONION, which most humans can figure out. The LLM lost that information, thus changing the label from JOKE to an implicit "I'M TELLING YOU THIS IS TRUE".
Losing that information has everything to do with how LLMs work.
This whole LLM/chatbot thing is a plethora of incredibly bad ideas, from the very basics of the underlying algorithm on up.