The positioning of LLM-based AI as a universal knowledge machine implies some pretty dubious epistemic premises, e.g. that the components of new knowledge are already encoded in language, and that the essential method for uncovering that knowledge is statistical.

Maybe no one in the field would explicitly claim those premises, but they're built into how the technology is being pitched to consumers.

@lrhodes The entire premise of the current AI bubble vis-à-vis the belief that AGI is on the horizon seems to be, "…and then, a miracle happens."
I.e., I don't actually think AI stans believe all new knowledge is embedded within existing knowledge. I think they believe that if they build a big enough LLM, it will make some sort of magical leap and _transcend_ the data it was trained on.
This, of course, is nonsense. But I'm pretty sure it's what they think.

@jik @lrhodes they do. It's a really old current in Silicon Valley that has slowly infected a large part of its shared ethos.

It's hard to explain to people outside of it because it runs deep beneath all of the Valley's "ideology".

LessWrong has a far larger impact than it might appear.