The positioning of LLM-based AI as a universal knowledge machine implies some pretty dubious epistemic premises, e.g. that the components of new knowledge are already encoded in language, and that the essential method for uncovering that knowledge is statistical.

Maybe no one in the field would explicitly claim those premises, but they're built into how the technology is being pitched to consumers.

@lrhodes I think this is a very interesting discussion, including what we mean by "new knowledge". LLMs can, for instance, solve math or physics problems they have not encountered in identical form, right? So do we erase that exercise from the category of "generating new knowledge", and so on?
@Quantensalat @lrhodes they can also hallucinate what they claim are answers, and leave us to discriminate between them.
@Quantensalat @lrhodes no, they cannot, by design

@lrhodes

I have not seen it put this way before but indeed this does get at the heart of things.

I think getting this view does require someone to have a pretty good intuition for how LLMs work, but once you do, this observation makes sense.

@lrhodes But see the "Platonic Representation Hypothesis" of Isola and others.
@lrhodes Trying to decode new knowledge from existing language feels like the process by which conspiracy theories are made.
@StaticR @lrhodes It is exactly how all fictions are made. LLMs are more geared towards being a convincing fictional version of a human than an intelligent machine. It’s like talking to a conman - some truth with a few whoppers mixed in.
@lrhodes Also, encoded in one of a few dominant languages.

@lrhodes well, of course it is. We don't invent new syntactic structures every time we have a genuine new thought, and introducing new words that are not in some way derived from existing ones is extremely rare in natural languages.

But I guess, what you mean is denotation, i.e., the relation between terms and objects or concepts in the real world, and on that, we (probably) agree: LLMs don't have those.

@lrhodes
what if "A" is false in "A implies B"? I mean, maybe LLM-based AI is not positioned as a universal knowledge machine. Someone said they were "statistically driven text extruding machines" (or so) that many people are willing to rent ($$$).
They might well be as revolutionary as the Gutenberg press, but far less resource-efficient. So, from The Capital perspective, what incentive do LLM-based AI owners have to share any knowledge it eventually produces?
@lrhodes hence all the talk about world models and "embodied AI"

@lrhodes I would go further. Language being a projection of thought and knowledge (projection in the sense of “trace”, “dimension reduction”, “compression”), considering that “thinking” or “knowledge” could be generated from large-scale discrete operations/recombinations on an auto-compressed medium (itself based on a dimension-reduced source material) is delusional.

I still find this interesting and fascinating (from when I was a cogsci/AI student), but less and less because of the toxic hype and the consequences (good job comparing to the CERN costs).

A fuzzy meta-book is not a brain.

@lrhodes The entire premise of the current AI bubble vis a vis the belief that AGI is on the horizon seems to be, "…and then, a miracle happens."
I.e., I don't actually think AI stans believe all new knowledge is embedded within existing knowledge. I think they believe if they build a big enough LLM it will make some sort of magical leap and _transcend_ the data it was trained on.
This, of course, is nonsense. But I'm pretty sure it's what they think.
@jik @lrhodes I think TESCREAL can be most accurately understood as a religious belief held by people influenced by Christian and other dominant cultures who lack the philosophical background (and think themselves "too smart") to recognize what is fundamentally a religious, superstitious belief, because it has sci-fi trappings.
@jik @lrhodes I'm going to bed but I think that beyond not having the mental tools to recognize the philosophy of what they are saying the techbros don't have the sense of community and society that was always a significant factor in the thinking of religious scholars of the past, which makes AGI theology unusually anti-social.
@jik @lrhodes There is an old book by Frank Herbert which perfectly explains that belief (https://en.wikipedia.org/wiki/Destination:_Void). In that book, they build a computer using infinite matrices of matrices and then, somehow, it becomes intelligent ... and a god, no less.

@jik @lrhodes they do. It is a really old current in Silicon Valley that has slowly infected a large part of the shared ethos of the valley.

It is hard to explain to people external to it because it is deeply under all of the valley "ideology".

LessWrong has a far larger impact than it looks.

@lrhodes good. At least we’ll never get rich output from AI, and we’ll eventually grow tired of it when the novelty wears off.
@lrhodes
Even more dubious is the claim that finite state machines are a sufficient computational mechanism to accomplish that, Chomsky Hierarchy be damned.
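
The Chomsky-hierarchy point above can be made concrete. A minimal sketch (my own illustration, not from the thread): a finite automaton has a fixed number of states and so cannot count unboundedly, which is why it cannot recognize even the simple context-free language aⁿbⁿ - it can only check a regular approximation like "some a's followed by some b's".

```python
# Sketch: a deterministic finite automaton (DFA) vs. a language that
# needs counting. Names and the example DFA are my own, for illustration.

def fsm_accepts(s, start, accept, delta):
    """Run a DFA over string s; reject on any undefined transition."""
    state = start
    for ch in s:
        state = delta.get((state, ch))
        if state is None:
            return False
    return state in accept

# A DFA can check the regular approximation "one or more a's then
# one or more b's" ...
delta = {("A", "a"): "A", ("A", "b"): "B", ("B", "b"): "B"}
approx = lambda s: fsm_accepts(s, "A", {"B"}, delta)

# ... but it cannot enforce *equal* counts of a's and b's; that requires
# unbounded memory (a stack), i.e. at least a pushdown automaton,
# one level up the Chomsky hierarchy.
def anbn(s):
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

print(approx("aabb"), anbn("aabb"))  # both accept the balanced string
print(approx("aab"), anbn("aab"))    # the DFA wrongly accepts "aab"
```

The point being illustrated: whatever one thinks transformers actually are computationally, claims about what an architecture can "know" have to respect these expressiveness limits.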