The Future of Everything is Lies, I Guess

Some people point at LLMs confabulating, as if this weren't something humans are already widely known to do.

I consider it highly plausible that confabulation is inherent to scaling intelligence. To compute over data whose dimensionality makes direct computation infeasible, you will most likely need to build a lower-dimensional representation and compute on that instead. Collapsing the dimensionality is lossy, which means there will be gaps between what the system thinks reality is and what reality actually is.
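A toy sketch of that lossiness (my illustration, not anything from the original): project 3-D points down to 2-D by discarding one coordinate, then "reconstruct" with a best guess for the lost axis. Two distinct realities collapse to one belief, and the gap between belief and reality is exactly the information the compression threw away.

```python
def compress(point):
    """Lossy projection: keep only the first two coordinates."""
    return point[:2]

def reconstruct(compressed, guess=0.0):
    """Invert the projection with a guess for the discarded axis."""
    return compressed + (guess,)

a = (1.0, 2.0, 5.0)
b = (1.0, 2.0, -5.0)  # differs only along the discarded dimension

assert compress(a) == compress(b)   # both realities look identical
belief = reconstruct(compress(a))   # the system's single "belief"
error_a = abs(belief[2] - a[2])     # gap between belief and reality a
error_b = abs(belief[2] - b[2])     # gap between belief and reality b
print(belief, error_a, error_b)     # (1.0, 2.0, 0.0) 5.0 5.0
```

Real representation learning (PCA, autoencoders, transformer embeddings) is far more sophisticated than dropping a coordinate, but the structural point is the same: anything the representation cannot distinguish, the system must fill in with a guess.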

And is that considered a feature of humans or a bug?

Is it something we want to emulate?

The suggestion is that it is an intrinsic quality and therefore neither a feature nor a bug.

It's like asking whether it is a feature or a bug that computation requires nonzero energy. Neither; the question is irrelevant, because it is a physical constraint of the universe that computation will always require nonzero energy.

If confabulation is a similar physical constraint on intelligence, then, like energy per computation, all we can do is try to minimize it, knowing it can never reach zero.