A thought experiment in the National Library of Thailand—or why #ChatGPT (or any other language model) doesn't actually understand.

https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83

@emilymbender Fascinating thought experiment. Here's my take. An often overlooked feature of LLMs is word embeddings, which encode the relationships between words as vectors. Vectors for 'good' and 'bad' point in opposite directions; vectors for dog breeds cluster in the same region, etc. I believe that this vector space captures something of reality, and that is what we call "meaning" and "understanding". Couldn't we build that vector space from the library?
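
A minimal sketch of what building such a vector space might look like, using gensim's Word2Vec on a tiny invented corpus (the corpus, the chosen words, and the hyperparameters are illustrative assumptions, not anything from the article; a real attempt would feed in the library's full text):

```python
from gensim.models import Word2Vec

# Hypothetical stand-in for sentences tokenized out of the library's books.
corpus = [
    ["the", "labrador", "is", "a", "friendly", "dog"],
    ["the", "poodle", "is", "a", "small", "dog"],
    ["the", "harvest", "this", "year", "was", "good"],
    ["the", "harvest", "last", "year", "was", "bad"],
]

# Train skip-gram embeddings: every word becomes a point in a 50-dimensional space.
model = Word2Vec(sentences=corpus, vector_size=50, window=3,
                 min_count=1, sg=1, epochs=200, seed=42)

# Words that appear in similar contexts end up near each other in that space.
print(model.wv.similarity("labrador", "poodle"))   # breeds cluster together
print(model.wv.most_similar("dog", topn=3))        # nearest neighbours of "dog"
```

Whether the geometry of such a space amounts to "meaning" is the very point the thought experiment questions: the vectors are derived purely from co-occurrence statistics of the text, with no link to the world the text is about.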
@FranklinMaillot We could also work more with the intention, meaning, and understanding of actual humans instead. Much less effort.