A thought experiment in the National Library of Thailand - or why #ChatGPT (or any other language model) doesn't actually understand.

https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83

@emilymbender
- This assumes that because the task is hard for a human, it would also be hard for a machine.
- It implies that human-like meaning is the only kind of meaning, which is not strictly true.

Let's instead try to learn arithmetic from a long list of solved equations - a task that is possible even for a human.
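A toy sketch of this idea (my own illustration, not from the thread): given nothing but a long list of solved equation strings, with no outside explanation of what `+` or `=` denote, can the underlying rule be recovered from the patterns alone? Here a linear hypothesis `c = x*a + y*b` is fitted by ordinary least squares, and the coefficients land on 1 and 1.

```python
import random

random.seed(0)

# The only "universe": a long list of solved equations, as strings.
equations = []
for _ in range(1000):
    a, b = random.randint(0, 99), random.randint(0, 99)
    equations.append(f"{a} + {b} = {a + b}")

# Parse each string into (a, b, c) without assuming what '+' means.
triples = []
for eq in equations:
    lhs, rhs = eq.split("=")
    a, b = lhs.split("+")
    triples.append((int(a), int(b), int(rhs)))

# Hypothesize c = x*a + y*b and fit x, y via the 2x2 normal equations.
saa = sum(a * a for a, b, c in triples)
sab = sum(a * b for a, b, c in triples)
sbb = sum(b * b for a, b, c in triples)
sac = sum(a * c for a, b, c in triples)
sbc = sum(b * c for a, b, c in triples)

det = saa * sbb - sab * sab
x = (sac * sbb - sbc * sab) / det
y = (saa * sbc - sab * sac) / det

# The pattern c = 1*a + 1*b emerges from the examples alone.
print(round(x, 6), round(y, 6))
```

Of course this hard-codes the hypothesis space, which a human or an LLM would not have handed to them - the point is only that the regularity is fully present in the list of strings.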

Meaning is how things relate to other things (?). But for an LLM, the whole universe consists of words; understanding how they relate to each other is all the meaning there is.
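This is essentially the distributional idea, and it can be sketched in a few lines (my illustration, with a made-up toy corpus): if meaning is relations between words, then words used in similar contexts should end up with similar co-occurrence vectors.

```python
from collections import Counter
from math import sqrt

# Tiny made-up corpus: "cat" and "dog" appear in similar contexts,
# "car" appears in different ones.
corpus = (
    "the cat chased the mouse . "
    "the dog chased the cat . "
    "the cat ate fish . "
    "the dog ate meat . "
    "the car needs fuel . "
    "the car drove fast . "
).split()

vocab = sorted(set(corpus))

def vector(word, window=2):
    """Count words co-occurring with `word` within +/- window tokens."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[corpus[j]] += 1
    return [counts[v] for v in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

cat, dog, car = vector("cat"), vector("dog"), vector("car")
print(cosine(cat, dog), cosine(cat, car))  # cat is closer to dog than to car
```

Nothing here ever looks outside the text, yet "cat" ends up nearer to "dog" than to "car" - relations among words are the only signal, and they already carry some structure one might call meaning.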

@emilymbender Conversely, we humans are able to assign meaning to things in our sensory universe without any Rosetta Stone, just by observing patterns.

We are just so predisposed to our sensory universe that discovering the meaning of patterns in other universes - for example, long lists of numbers describing turbulent flows - is comparatively hard for us. But it might not be for a machine.