The Google AI summary suggesting that people eat rocks is amusing, but it's not a great example of AI "hallucination". The text is a pretty straight and accurate summary of a satirical Onion article. This isn't a complex algorithm synthesizing bogus conclusions from good data (something that's definitely a real risk in AI systems). This is simply Google miscategorizing non-factual input as factual, something it could have done (and has done) just as easily without "AI".
@mattblaze the same was true of the ones I've seen for fighting snakes at a thesis defense, recipes for gasoline pizza and glue in pizza, and a couple of others. But it doesn't help that it has stripped the source and gives the impression it's a synthesis of many sources when it actually just grabbed one source.

@PlasmaGryphon
I'm not saying this is *good*. I'm just saying this isn't a useful example of AI hallucination.

Google has long (and without help from AI) conflated "popular" (which the Onion certainly is) with "authoritative" (which the Onion certainly isn't).

@mattblaze @PlasmaGryphon While true, there’s a big difference between a search returning text that claims A with no clear source, simply presented as fact, and returning a link to a page that says A, where the link itself tells the story.

@WhiteCatTamer @PlasmaGryphon Which part of "I didn't say this is *good*" is unclear here?

Not everything that's wrong with Google search results has to do with large language models.

@mattblaze @PlasmaGryphon I’m not saying you’re saying it’s good. I’m saying that you’re right that Google could do this without AI by linking to the site, but important information still gets left out when it’s done this way, that is, through an AI declaring it.

It’s not hallucination, but the medium is part of the message.