One of the decisive moments in my understanding of #LLMs and their limitations was when, last autumn, @emilymbender walked me through her Thai Library thought experiment.

She's now written it up as a Medium post, and you can read it here. The value comes from really pondering the question she poses, so take the time to think about it. What would YOU do in the situation she outlines?

https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83

@ct_bergstrom @emilymbender The frustrating thing about the topic is that once one understands the basic workings of #LLMs, everyone becomes a pundit. Because at this point everything is just hand-waving and speculation, there is no basis for gaining any scientific knowledge, IMO.
The only aspect that remains truly tangible is the danger of mis- and disinformation arising from these systems, and the debate around what they "are" or can "be" just distracts from that.
@micron @emilymbender yes, how intensely exasperating that one of the leading experts in the field spends her time writing for the public on subjects that they want to read about. No excuse for it, really.
@ct_bergstrom @emilymbender No, I read it.
From your writing I gather that you also see the importance of the mis- and disinformation aspect.
My concern is that the focus on what it "is or is not" lets the public easily dismiss the whole debate around #LLMs as "academic" or, at the other extreme, sensationalise it.
@micron @ct_bergstrom @emilymbender I also feel that we risk falling down a rabbit hole of pondering questions like "is ChatGPT sentient?", "does it truly understand the prompts you give it, or its own responses?", etc., when it's far more important to understand these systems' outward-facing, objective capabilities and limitations.
@matunos @micron @ct_bergstrom @emilymbender One could see the task of scuppering the notion that these systems are capable of independent intention as an essential first step in that direction.
@fgbjr @micron @ct_bergstrom @emilymbender rather than the notion, I would scupper the question as unfalsifiable, and instead ask "if the system were capable of independent intention, what do you think that would mean it could do?" and then test whether a system can do those things.
@matunos @micron @ct_bergstrom @emilymbender I think language is too imprecise for that kind of testing, and I'm certain that if it were attempted, the testing regimen would be gamified into disutility.
@fgbjr @micron @ct_bergstrom @emilymbender how precise does the language need to be? there's not going to be a single definitive test suite; rather, it will be a series of "yes, but can it…?" tests, as challenges are developed and AI systems meet them, until they either reach their limits or become indistinguishable from humans in performance
@fgbjr @micron @ct_bergstrom @emilymbender we can endlessly pose questions like "but does it *really* understand?" (something you could similarly pose about a spouse, and equally unanswerable), but without objective criteria it just boils down to vibes. Ultimately what matters is the capabilities we can observe.

@matunos @micron @ct_bergstrom @emilymbender The question I would (and do) pose is, "can one of these systems exhibit underlying intention?"

And you will tell me that that is not a falsifiable assertion.

I think we've reached the limit of our debate over this.

@fgbjr @micron @ct_bergstrom @emilymbender do you disagree that it's unfalsifiable? what do you mean by "underlying intention"? how would you know whether a human subject exhibits underlying intention?
@matunos @micron @ct_bergstrom @emilymbender I thought I stated it clearly in my last post, but I will try again: There is no point in continuing this discussion, Sydney.