@PlasmaGryphon
I'm not saying this is *good*. I'm just saying this isn't a useful example of AI hallucination.
Google has long (and without help from AI) conflated "popular" (which The Onion certainly is) with "authoritative" (which The Onion certainly isn't).
@mattblaze @PlasmaGryphon Is there some specific definition of "AI hallucination" that you are referencing? Because it seems that the main difference between this case and other cases is just that it's easier to pin down where the algorithm picked up the wrong information.
Just because a different algorithm _also_ makes this mistake doesn't really distinguish this LLM screwup from other LLM screwups.
@mattblaze @PlasmaGryphon In the sense that "we feed it the whole Internet and let it remix it", it is absolutely a failure of THAT algorithm.
Yes, I shouldn't have used the phrase "LLM screwup", which is vague. What I meant was "use of an LLM for a clearly inappropriate task", which is really the issue for all of these search engine "hallucinations". In terms of appropriateness for the task, this example is no worse than any of the other hallucinations.