RE: https://toot.cafe/@baldur/116130499944110898
I’m sure many others have made this observation, but even just reading this post, without reading the linked article, made me realise (or remember) that...
*All* LLM output is, in fact, a hallucination.
Because the way it formulates a “hallucination” *is exactly the same* as the way it formulates a response *we don’t consider* a hallucination. The model only ever predicts plausible next tokens; “hallucination” is a label we apply afterwards, not a different mode of operation.
Same with “good” vs. “bad” summaries, whatever the relative frequency of each turns out to be.
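
To make that concrete, here’s a minimal sketch with a toy stand-in for a model. The tokens and probabilities are made up, and `next_token_distribution` is a hypothetical placeholder for a real network’s softmax output; the point is structural: there is one sampling loop, and nothing in it checks truth.

```python
import random

def next_token_distribution(context: list[str]) -> dict[str, float]:
    # Toy stand-in for a language model: maps a context to a probability
    # distribution over next tokens. A real LLM does the same thing with
    # billions of parameters, but the interface is identical.
    # (This toy ignores the context; the distribution is invented.)
    return {"Paris": 0.6, "Lyon": 0.3, "Narnia": 0.1}

def generate(context: list[str], n_tokens: int) -> list[str]:
    out = list(context)
    for _ in range(n_tokens):
        dist = next_token_distribution(out)
        tokens, weights = zip(*dist.items())
        # One sampling step: pick a token in proportion to its probability.
        # There is no branch here that checks whether the token is *true*;
        # "hallucinated" and "correct" outputs both come from this same line.
        out.append(random.choices(tokens, weights=weights)[0])
    return out

print(generate(["The", "capital", "of", "France", "is"], 1))
```

Whether the run prints “Paris” or “Narnia”, the procedure that produced it was identical.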