A thought experiment in the National Library of Thailand—or why #ChatGPT (or any other language model) isn't actually understanding.

https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83

@emilymbender I wanted to ask about something I haven't seen discussed before.

The article posits that all meaning is grounded in external reference, and that in language acquisition that grounding must be either direct or indirect (mediated by a prior language).

ISTM *some* meaning can be grounded in self-reference: basic arithmetic, for example (see "Contact"), which yields meanings of truth and falsity.
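
To sketch what I mean (a toy illustration of my own, not anything from the article): an arithmetic claim found in text can be assigned a truth value purely by symbol manipulation, with no appeal to anything outside the notation.

```python
import re

# Toy sketch (my example, not the article's): arithmetic claims in text
# can be judged true or false without ever leaving the symbol system.
CLAIM = re.compile(r"(\d+)\s*\+\s*(\d+)\s*=\s*(\d+)")

def truth_value(text: str) -> bool | None:
    """True/False for the first 'a + b = c' claim found; None if absent."""
    m = CLAIM.search(text)
    if m is None:
        return None
    a, b, c = (int(g) for g in m.groups())
    return a + b == c  # the check is internal to the notation

print(truth_value("We know that 2 + 2 = 4."))  # True
print(truth_value("Suppose 2 + 2 = 5."))       # False
```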

Self-referential meaning could also arise from language-instructional texts, such as those written for children.

@emilymbender For a concrete example, one could derive a concept of the number 3 from observation of a "counted list" pattern:

"Here are three examples of reptiles: snakes, lizards, and turtles."

@emilymbender So, my question is: does that demonstrate that there exists a category of meaning (self-referentially derivable meaning) that could be learned purely from textual observation? If so, do we know anything about that category, and does it provide useful bounds on the kind of intelligence we could expect from a hypothesized omnipotent language model?