Three years in the making - our big review/opinion piece on the capabilities of large language models (LLMs) from a cognitive science perspective.

Thread below! 1/

#AI #cogneuro #NLP #LLMs #languageandthought

https://arxiv.org/abs/2301.06627

Dissociating language and thought in large language models

Large Language Models (LLMs) have come closest among all models to date to mastering human language, yet opinions about their linguistic and cognitive capabilities remain split. Here, we evaluate LLMs using a distinction between formal linguistic competence -- knowledge of linguistic rules and patterns -- and functional linguistic competence -- understanding and using language in the world. We ground this distinction in human neuroscience, which has shown that formal and functional competence rely on different neural mechanisms. Although LLMs are surprisingly good at formal competence, their performance on functional competence tasks remains spotty and often requires specialized fine-tuning and/or coupling with external modules. We posit that models that use language in human-like ways would need to master both of these competence types, which, in turn, could require the emergence of mechanisms specialized for formal linguistic competence, distinct from functional competence.

The key point we’re making is the distinction between *formal competence* - the knowledge of linguistic rules and patterns - and *functional competence* - a set of skills required to use language in real-world situations. 2/

We ground this distinction in cognitive neuroscience.

Years of empirical work show that humans have specialized neural machinery for language processing (reading, listening, speaking, etc.), which is distinct from the brain mechanisms underlying other cognitive capacities (social reasoning, intuitive physics, logic and math…). 3/

Armed with the formal/functional distinction, we thoroughly review the NLP literature. We show that, on the one hand, LLMs are surprisingly good at *formal* linguistic competence, making significant progress on phonology, morphosyntax, and much more. 4/
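A concrete way this gets tested in the literature (our sketch, not code from the paper; the model and sentence pair are illustrative): give an LM a grammatical sentence and a minimal-pair violation, and check which one it assigns higher probability, as in benchmarks like BLiMP.

```python
# Minimal-pair probing sketch (illustrative model and sentences).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def total_logprob(sentence: str) -> float:
    """Summed next-token log-probability the LM assigns to a sentence."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss      # mean cross-entropy per predicted token
    return -loss.item() * (ids.size(1) - 1)  # rescale to a summed log-prob

good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."
print(total_logprob(good) > total_logprob(bad))  # formal competence predicts True
```

Agreement across an intervening phrase (“keys … are”) is exactly the kind of morphosyntactic generalization modern LLMs tend to get right.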
On the other hand, LLMs are still quite bad at many aspects of functional competence (math, reasoning, world knowledge) - especially when a task deviates from commonly occurring text patterns. 5/

We argue that the word-in-context prediction objective is not enough for a model to master human thought (even though it’s surprisingly effective for learning a lot about language!).

Instead, like human brains, models that strive to master both formal & functional linguistic competence will benefit from modular components - either built in explicitly or emerging through a careful combination of data, training objectives, and architecture. 6/
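To make “word-in-context prediction” concrete, here’s a toy, self-contained sketch of the objective (all tensors are stand-ins we made up): pretraining minimizes cross-entropy on each next token given its context, and that’s the entire signal.

```python
# Toy sketch of the next-word (word-in-context) prediction objective.
import torch
import torch.nn.functional as F

batch, ctx, vocab = 4, 8, 100
logits = torch.randn(batch, ctx, vocab)      # stand-in model outputs
tokens = torch.randint(vocab, (batch, ctx))  # stand-in training text

# Position t predicts token t+1, so shift predictions and targets by one.
pred = logits[:, :-1, :].reshape(-1, vocab)
target = tokens[:, 1:].reshape(-1)
loss = F.cross_entropy(pred, target)         # the entire pretraining signal
print(loss.item())
```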

ChatGPT, with its combination of next-word prediction and RLHF objectives, might be a step in that direction (although it still can’t think imo). 7/
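For readers unfamiliar with RLHF, here’s a deliberately simplified, self-contained toy of the idea (real systems use a reward model learned from human preferences plus PPO with a KL penalty; every name below is illustrative).

```python
# Schematic RLHF-style update: nudge the policy toward high-reward samples.
import torch

ctx, vocab = 5, 50
policy_logits = torch.randn(ctx, vocab, requires_grad=True)  # toy "policy"

dist = torch.distributions.Categorical(logits=policy_logits)
sample = dist.sample()               # sampled continuation, shape (ctx,)

# Stand-in for a reward model; real RLHF learns this from human preferences.
reward = torch.tanh(sample.float().mean())

# REINFORCE: raise the log-prob of the sample in proportion to its reward.
loss = -dist.log_prob(sample).sum() * reward
loss.backward()
print(policy_logits.grad.norm())     # the gradient that would update the policy
```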

The formal/functional distinction is important for clarifying much of the current discourse around LLMs. Too often, people mistake coherent text generation for thought or even sentience. We call this a “good at language = good at thought” fallacy. 8/

https://theconversation.com/googles-powerful-ai-spotlights-a-human-cognitive-glitch-mistaking-fluent-speech-for-fluent-thought-185099

Google's powerful AI spotlights a human cognitive glitch: Mistaking fluent speech for fluent thought

Fluent expression is not always evidence of a mind at work, but the human brain is primed to believe so. A pair of cognitive linguistics experts explain why language is not a good test of sentience.

Similarly, criticisms directed at LLMs often center on their inability to think (or do math, or maintain a coherent worldview) and sometimes overlook their impressive advances in language learning. We call this a “bad at thought = bad at language” fallacy. 9/

It’s been fun working with a brilliant team of coauthors - @kmahowald @ev_fedorenko @ibandlank @NancyKanwisher & Josh Tenenbaum

We’ve done a lot of work refining our views and revising our arguments every time a new big model came out. In the end, we still think a cogsci perspective is valuable - and hope you do too :) 10/10

P.S. Although we have >20 pages of references, we are likely missing stuff. If you think we don’t cover important work, pls comment below! We also under-cover certain topics (grounding, memory, etc.) - if you think something doesn’t square with the formal/functional distinction, let us know.