It’s nice to see that my paper on large language models is getting attention. But some readers might be taking me to be saying things I’m not. So here’s a short clarificatory thread. https://arxiv.org/abs/2212.03551 1/4
Talking About Large Language Models

Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs). The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as "knows", "believes", and "thinks", when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere.

The paper is not making philosophical claims about belief, knowledge, or thought. Rather, the paper draws attention to the difference between humans, to whom such concepts naturally apply, and today’s LLM-based systems, where their application is far less straightforward. 2/4
The paper is not trying to ban words like “believes”, “knows”, or “thinks” in the context of LLMs. Rather, the paper is advocating caution, so people don’t take such words literally when they are meant only figuratively. 3/4
The paper is not making any claims about what systems based on LLMs might one day be capable of. The paper is neutral about this. 4/4
@mshanahan the paper tries to be neutral perhaps, but IMO that’s not sustainable, because some of the distinctions the paper draws, putting current LLMs on the “no REAL understanding” side of the line, can be so easily bridged.
@jmmcd I don’t disagree that they could be bridged, and the paper points in relevant directions, e.g. multi-modality and embodiment.