Positioning LLM-based AI as a universal knowledge machine rests on some dubious epistemic premises: that the components of new knowledge are already encoded in language, and that the essential method for uncovering that knowledge is statistical.
Maybe no one in the field would claim those premises explicitly, but they're built into how the technology is pitched to consumers.