One of the decisive moments in my understanding of #LLMs and their limitations was when, last autumn, @emilymbender walked me through her Thai Library thought experiment.

She's now written it up as a Medium post, and you can read it here. The value comes from really pondering the question she poses, so take the time to think about it. What would YOU do in the situation she outlines?

https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83

@ct_bergstrom @emilymbender That's very good. One difference going forward between #LLMs and Emily's "stuck in a Thai library with only words" scenario is that Bing-style ChatGPT gets to make up answers and see how real people respond. If you were smart, couldn't speak Thai, were stuck in a Thai library, *and* you could try out sentences on Thai people to see their responses, could you gradually build up some concept of what words mean? Or would you still need some external context to apply meaning?
@joncounts @ct_bergstrom @emilymbender That's what I was thinking. If an LLM is allowed to train on the queries it gets, it should be able to learn how to manipulate them. The "reality" of an LLM is the feedback it can get from the answers it sends out.
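The loop being described here — try out an answer, observe the human response, treat that response as the only available signal — can be sketched in a few lines. This is a purely illustrative toy, not any real LLM training pipeline; every name in it (`FeedbackLoopModel`, `respond`, `record`, `update`) is a hypothetical stand-in.

```python
class FeedbackLoopModel:
    """Toy sketch of a model whose only 'reality' is feedback on its outputs."""

    def __init__(self):
        # (query, answer, reward) triples collected from interactions
        self.experience = []

    def respond(self, query: str) -> str:
        # Stand-in for actual text generation.
        return f"guess about: {query}"

    def record(self, query: str, answer: str, reward: float) -> None:
        # The human response, reduced to a reward signal, is all the
        # model ever learns about the world outside its text.
        self.experience.append((query, answer, reward))

    def update(self) -> int:
        # Stand-in for fine-tuning on accumulated feedback; here it just
        # counts the positively rated interactions it would train on.
        positive = [e for e in self.experience if e[2] > 0]
        return len(positive)


model = FeedbackLoopModel()
query = "what does this Thai word mean?"
answer = model.respond(query)
model.record(query, answer, reward=1.0)
print(model.update())  # prints 1: one positively rated interaction retained
```

The point of the sketch is the restriction, not the mechanics: `record` is the model's entire contact with the people it is answering, which is exactly the worry raised in the next reply.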
@emilymbender @gluejar @ct_bergstrom @joncounts Get feedback… From humans, on the internet? That’s not going to turn out well.