The Google AI summary suggesting that people eat rocks is amusing, but it's not a great example of AI "hallucination". The text is a pretty straight and accurate summary of a satirical Onion article. This isn't a complex algorithm synthesizing bogus conclusions from good data (something that's definitely a real risk in AI systems). This is simply Google miscategorizing non-factual input as factual, something it could have (and has) done just as easily without "AI".
@mattblaze The same was true of the ones I've seen for fighting snakes at a thesis defense, recipes for gasoline pizza and glue in pizza, and a couple of others. But it doesn't help that it has stripped the source and gives the impression it's a synthesis of many sources when it actually just grabbed one source.

@PlasmaGryphon
I'm not saying this is *good*. I'm just saying this isn't a useful example of AI hallucination.

Google has long (and without help from AI) conflated "popular" (which the Onion certainly is) with "authoritative" (which the Onion certainly isn't).

@mattblaze @PlasmaGryphon The problem here is that today's AI systems are really bad at understanding context, humor, sarcasm, etc. If I do a search on loose cheese on pizza and see a link to theonion.com, I know the context of what I'll get. An LLM does not, which means that content should have been excluded from the training data. But can Google do that with content (like snakes at dissertation defenses) that has been reposted? I suspect that if anyone can, they can. But can they?
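A minimal sketch, in Python, of the kind of source-level filter being described here. The blocklist, the document fields, and the example URLs are all hypothetical, not anything Google actually does; the point is that a domain-based filter, as Bellovin notes, does nothing about satire that has been reposted on some other site:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known satire domains; a real pipeline
# would need a far larger, curated list.
SATIRE_DOMAINS = {"theonion.com", "clickhole.com"}

def is_satire_source(url: str) -> bool:
    """Return True if the document's source URL is a known satire site."""
    host = urlparse(url).netloc.lower()
    # Match the domain itself and any subdomain (e.g. www.theonion.com).
    return any(host == d or host.endswith("." + d) for d in SATIRE_DOMAINS)

def filter_training_docs(docs):
    """Drop documents whose source URL is on the satire blocklist.

    The limitation raised in the thread: a reposted copy of an Onion
    article carries a different URL, so it sails through this filter
    as if it were factual content.
    """
    return [d for d in docs if not is_satire_source(d["url"])]

# The original is dropped, but the reposted copy slips past the filter.
docs = [
    {"url": "https://www.theonion.com/eat-rocks", "text": "..."},
    {"url": "https://random-blog.example/eat-rocks-repost", "text": "..."},
]
print(filter_training_docs(docs))
```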
@SteveBellovin @mattblaze @PlasmaGryphon The problem is that LLMs have no concept of anything.