It’s nice to see that my paper on large language models is getting attention. But some readers might be taking me to be saying things I'm not. So here’s a short clarificatory thread. https://arxiv.org/abs/2212.03551 1/4
Talking About Large Language Models

Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs). The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as "knows", "believes", and "thinks", when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere.

The paper is not making philosophical claims about belief, knowledge, or thought. Rather, the paper draws attention to the difference between humans, to whom such concepts naturally apply, and today’s LLM-based systems, where things get complicated. 2/4

@mshanahan Hi, hope you don't mind me continuing the interesting discussion around the paper here as well :)

fwiw, I actually think you are, inescapably, making philosophical claims about belief, knowledge, and thought. And that that is *fantastic*.

The reason simply being that you can't substantively compare AI/LLMs and humans along concepts such as belief, knowledge, or thought, without implicit philosophical claims and assumptions.

@mshanahan I agree that humans and LLMs are (self-evidently) different in this regard, but I think the value of the paper, beyond the behavioural nudge for caution, is that it opens up and underlines the various aspects relevant to disambiguating the particular behaviours/processes that are so tempting to conflate.

This explorative comparative approach is, in my opinion, philosophy, and I'm curious whether you don't want to lean in and embrace the philosophical aspect, or are simply cautious about doing so.

@marcolin Yes, I agree that is philosophy, and I'm happy to embrace it
@marcolin Yes, the paper is philosophical. But would it be making a philosophical claim about knowledge to point out to someone who says that Wikipedia *knows* who won the 2022 World Cup that they are using the word in a different sense to someone who says that Lionel Messi *knows* who won the 2022 World Cup?

@mshanahan

Not just like that, but it becomes one when you elaborate on the ways in which that is the case.

*THAT* they are different isn't the substantive philosophical claim.

The aspects/conditions you draw upon in elaborating the difference *are* a philosophical claim imo, even if an implicit one. Although I might say more "stance" than claim.

For example, the propositions' reflexive significance (special to the system itself), or their relation to an internalised notion of truth/falsehood.

@mshanahan

In addition, there's also imo a difference between explicit philosophical claims/arguments and the implicit *effect* of eliciting philosophical inquiry in the reader.

The beauty, imo, is that a comparative treatment like yours elicits philosophical reflection on these terms in relation both to ourselves and to LLMs/AI.

And I don't think there's another way than to be philosophical about both.

@marcolin Yes, I take your point. Maybe there's a better way to characterise what I'm *not* doing, philosophically, than the way I put it in that toot / tweet. (And thanks for the kind words.)