A thought experiment in the National Library of Thailand—or why #ChatGPT (or any other language model) isn't actually understanding.
@emilymbender I see a problem with asking a reader to analogize their idea of themselves understanding a language to the capabilities of a computational model.
I understand things with my brain, through mechanisms that I can barely understand, and which at any rate are vastly different from how an LLM operates.
The best I could do in the library to truly test the capabilities of an LLM would be to create one and use it. But now the thought experiment has become circular.
@emilymbender In other words, this would seem to eventually devolve into a variant of the Chinese Room experiment, with the precondition that whatever computational rules are employed in the "Thai Room," if you will, must be derived from the data in the library.
But can a Chinese Room be said to understand Chinese? The best response to that is: how would one know if it did?