I'm sure language change will do its thing and it will soon be 'The Marr the Marrier'.
#Marr #cognition #cogneuro #Marrslevels #languagechange #historicallinguistics
A systematic review I helped with is out today! We summarise the existing research using flickering stimuli* to understand visual cognition in the first 6 years of life, with explanations of the different methodological approaches and some of the insights these approaches have yielded.
*also known as frequency-tagging, fast periodic visual stimulation, rhythmic visual stimulation, and many other names
@sandervanbree gave a talk at @CCNiUofGlasgow today presenting his recent Perspectives paper on Neural Mechanisms in Cognitive Neuroscience
📄: https://journals.sagepub.com/doi/10.1177/17456916231191744
He combines Marr's 3 levels with ideas from mechanistic philosophy to explain which mechanisms produce specific computations in the brain 🧠.
To understand a system, counterfactual knowledge is required (i.e. knowing how it would behave if conditions were varied, with some wiggle room).
Three years in the making: our big review/opinion piece on the capabilities of large language models (LLMs) from the cognitive science perspective.
Thread below! 1/
Large Language Models (LLMs) have come closest among all models to date to mastering human language, yet opinions about their linguistic and cognitive capabilities remain split. Here, we evaluate LLMs using a distinction between formal linguistic competence -- knowledge of linguistic rules and patterns -- and functional linguistic competence -- understanding and using language in the world. We ground this distinction in human neuroscience, which has shown that formal and functional competence rely on different neural mechanisms. Although LLMs are surprisingly good at formal competence, their performance on functional competence tasks remains spotty and often requires specialized fine-tuning and/or coupling with external modules. We posit that models that use language in human-like ways would need to master both of these competence types, which, in turn, could require the emergence of mechanisms specialized for formal linguistic competence, distinct from functional competence.
MIT News covered our latest study of computer programming in the brain 🙂
Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) found that the Multiple Demand and Language brain systems encode specific code properties and uniquely align with machine-learned representations of code.