A thought experiment in the National Library of Thailand—or why #ChatGPT (or any other language model) isn't actually understanding.
@osma @emilymbender
The difference is that the Chinese Room has a big book of instructions telling you how to compose a response using Chinese characters. The Thai library doesn't even have that; you must somehow write this big book of instructions yourself.
I imagine archaeologists discovering the library of a lost civilization that had figured out how to communicate with an alien race. This alien race has now transmitted a question to modern humans. How do we formulate a response?
@osma @emilymbender
Although the now very closed "Open"-AI keeps the nature of GPT4 a proprietary secret, it is widely assumed that the same InstructGPT training procedure was used at some point in its development.
Or perhaps, like the fine-tuned derivatives of the original LLaMA which have earned their own names (Alpaca, Vicuña, etc.), the instruction-following aligned GPT4 derivative that everyone is using via the web API officially goes by a different name but is erroneously referred to as GPT4.
@osma @emilymbender
And a key flaw of these thought experiments is that we can still assume a sentient being communicating with another sentient being, the two having a lot in common: needing to eat, having to ask others for things, forming collectives, motivation vs. instinct. I thought this scene in Arrival was most enlightening:
But how does an LLM on a computer, which has never had to beg to be provided with electricity, understand the concept of a child asking for food?