A thought experiment in the National Library of Thailand—or why #ChatGPT (or any other language model) isn't actually understanding.

https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83

@emilymbender I see a problem with asking a reader to analogize their idea of themselves understanding a language to the capabilities of a computational model.

I understand things with my brain, through mechanisms that I can barely understand, and which at any rate are vastly different from how an LLM operates.

The best I could do in the library to truly test the capabilities of an LLM would be to create one and use it. But now the thought experiment has become circular.

@emilymbender In other words, this would seem to eventually devolve into a variant of the Chinese Room experiment, with the precondition that whatever computational rules are employed in the "Thai Room," if you will, must be derived from the data in the library.

But can a Chinese Room be said to understand Chinese? The best response to that is: how would one know if it did?

@emilymbender If, instead of a subjective sense of understanding, we had a list of tasks I would be expected to perform to demonstrate my understanding (tasks we expect humans could only perform well if they understood the language), then I suspect I could theoretically succeed at least at some of them without actually understanding the language myself.