The positioning of LLM-based AI as a universal knowledge machine implies some pretty dubious epistemic premises, e.g. that the components of new knowledge are already encoded in language, and that the essential method for uncovering that knowledge is statistical.

Maybe no one in the field would explicitly claim those premises, but they're built into how the technology is being pitched to consumers.

@lrhodes I would go further. Language is a projection of thought and knowledge (projection in the sense of “trace”, “dimension reduction”, “compression”). Given that, the idea that “thinking” or “knowledge” could be generated from large-scale discrete operations/recombinations on an auto-compressed medium (itself built from dimension-reduced source material) is delusional.

I still find this interesting and fascinating (from my days as a cogsci/AI student), but less and less so because of the toxic hype and its consequences (good job comparing the costs to CERN's).

A fuzzy meta-book is not a brain.