One of the decisive moments in my understanding of #LLMs and their limitations was when, last autumn, @emilymbender walked me through her Thai Library thought experiment.

She's now written it up as a Medium post, and you can read it here. The value comes from really pondering the question she poses, so take the time to think about it. What would YOU do in the situation she outlines?

https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83

@ct_bergstrom @emilymbender Interesting thought experiment, very similar to Searle's Chinese Room. As far as I understand it without having studied it deeply, aren't the "unlimited time", our limitations wrt memory, and our own expectation of frustration doing too much work here?

@b3n @ct_bergstrom @emilymbender definitely similarities - sufficient that it was my first thought. Although this is a different angle of attack: how would you learn vs. how has the system learned? To some extent the chat engines are being given external input by users as additional training, but it seems to me that pales in comparison to their base model.

I'm unsure if the grand plan is to get so many people using them that their feedback becomes significant.

@b3n @ct_bergstrom @emilymbender the real problem with that is that an insufficient number of people can tell good output from hallucinated lies. So at best we'd be training the systems to be better liars, or at least more agreeable ones.

It boggles my mind that none other than Google failed to launch their Bard system with a built-in bullshit detector. They have a huge knowledge graph that could easily be analysing output for BS.
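
To make that idea concrete, here's a minimal sketch in Python of what such a check could look like: extract factual triples from the model's output and look them up in a knowledge graph. Everything here is hypothetical - the toy graph, the hand-written triples, and the `verify_claims` helper are illustrative stand-ins, not Google's actual knowledge graph or any real fact-checking pipeline:

```python
# Hypothetical sketch: flag generated claims that a knowledge graph
# cannot confirm. The graph and the extracted triples are toy
# stand-ins for illustration only.

# A tiny in-memory "knowledge graph" of (subject, predicate, object) facts.
KNOWLEDGE_GRAPH = {
    ("Bangkok", "capital_of", "Thailand"),
    ("Thai", "official_language_of", "Thailand"),
}

def verify_claims(claims):
    """Return (triple, supported) pairs for each extracted claim."""
    return [(triple, triple in KNOWLEDGE_GRAPH) for triple in claims]

# In a real system these triples would come from an information-extraction
# step run over the model's output; here they are hand-written.
extracted = [
    ("Bangkok", "capital_of", "Thailand"),     # matches the graph
    ("Chiang Mai", "capital_of", "Thailand"),  # hallucinated claim
]

for triple, ok in verify_claims(extracted):
    status = "supported" if ok else "UNSUPPORTED - possible hallucination"
    print(triple, "->", status)
```

The obvious caveat is that real knowledge graphs are incomplete, so a claim that is absent from the graph isn't necessarily false - at best this flags statements for scrutiny rather than proving them to be BS.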

@b3n @ct_bergstrom @emilymbender another issue seems to be that even if we could convince ourselves that these models "understand", what use is understanding without empathy or consequences? Congratulations, you've just built yourself a chat psychopath.