Martin Schrimpf

@mschrimpf
Modeling the Brain by bridging Machine Learning & Neuroscience; NeuroAI Prof at EPFL. Previously at MIT, Deep Learning at Salesforce, Co-founder at Integreat, Neuro at Harvard
Functional responses in the brain to linguistic inputs are spatially organized -- but why? We show that a simple smoothness loss added to language model training explains a range of topographic phenomena in neuroscience: https://arxiv.org/abs/2410.11516
#NeuroAI #neuroscience #compneuro #language
TopoLM: brain-like spatio-functional organization in a topographic language model

Neurons in the brain are spatially organized such that neighbors on tissue often exhibit similar response profiles. In the human language system, experimental studies have observed clusters for syntactic and semantic categories, but the mechanisms underlying this functional organization remain unclear. Here, building on work from the vision literature, we develop TopoLM, a transformer language model with an explicit two-dimensional spatial representation of model units. By combining a next-token prediction objective with a spatial smoothness loss, representations in this model assemble into clusters that correspond to semantically interpretable groupings of text and closely match the functional organization in the brain's language system. TopoLM successfully predicts the emergence of the spatio-functional organization of a cortical language system as well as the organization of functional clusters selective for fine-grained linguistic features empirically observed in human cortex. Our results suggest that the functional organization of the human language system is driven by a unified spatial objective, and provide a functionally and spatially aligned model of language processing in the brain.

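A minimal sketch of the training objective, assuming a PyTorch setup. TopoLM's actual spatial loss is defined over pairwise correlations of unit activations on the grid; the simpler neighbor-difference penalty below, along with the grid size and the weighting `alpha`, are illustrative stand-ins for the same pressure toward local smoothness:

```python
import torch
import torch.nn.functional as F

def smoothness_loss(hidden: torch.Tensor, grid_side: int) -> torch.Tensor:
    """Spatial smoothness penalty over model units laid out on a 2D grid.

    hidden: (batch, seq_len, d_model) activations of one transformer layer,
    with d_model == grid_side ** 2 so every unit has a fixed grid position.
    Penalizing squared differences between horizontally and vertically
    adjacent units pushes neighbors toward similar response profiles.
    """
    b, t, d = hidden.shape
    assert d == grid_side ** 2, "units must fill the square grid"
    grid = hidden.view(b, t, grid_side, grid_side)
    dx = grid[:, :, :, 1:] - grid[:, :, :, :-1]  # horizontal neighbor pairs
    dy = grid[:, :, 1:, :] - grid[:, :, :-1, :]  # vertical neighbor pairs
    return dx.pow(2).mean() + dy.pow(2).mean()

def training_loss(logits, targets, hidden, grid_side, alpha=0.1):
    """Next-token prediction combined with the spatial term (alpha illustrative)."""
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    return ce + alpha * smoothness_loss(hidden, grid_side)
```

The key design choice is that the spatial term acts on a fixed two-dimensional layout of model units, so functional clustering emerges from training rather than being built in.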
In 2021, we were surprised to find that untrained language models are already decent predictors of activity in the human language system (http://doi.org/10.1073/pnas.2105646118). In https://arxiv.org/abs/2406.15109, we identify tokenization and aggregation as the core architectural components.
Building on these findings, we constructed a simple untrained network with SOTA alignment to brain and behavioral data; this feature encoder provides representations that are then useful for efficient language modeling.
#neuroai #llm #language
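As a rough illustration of what "tokenization + aggregation" can mean in practice, here is a hedged sketch of an untrained feature encoder: frozen random token embeddings mean-pooled into a sentence vector, mapped to neural data with ridge regression. The names and hyperparameters are hypothetical; the actual architecture is described in the linked paper:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
VOCAB_SIZE, D_MODEL = 50_000, 1024  # hypothetical sizes

# Frozen random embedding table: the "tokenization" component.
embedding = rng.standard_normal((VOCAB_SIZE, D_MODEL)) / np.sqrt(D_MODEL)

def encode(token_ids):
    """The "aggregation" component: mean-pool random token embeddings
    into a single sentence representation. No training involved."""
    return embedding[np.asarray(token_ids)].mean(axis=0)

def fit_encoding_model(sentences_tokens, brain_responses):
    """Map untrained sentence features to recorded responses with ridge
    regression, a standard linear-readout setup for brain alignment.

    sentences_tokens: list of token-id sequences, one per sentence.
    brain_responses:  (n_sentences, n_voxels) array of measurements.
    """
    X = np.stack([encode(t) for t in sentences_tokens])
    return Ridge(alpha=1.0).fit(X, brain_responses)
```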
Applications are now open for the Summer@EPFL program http://summer.epfl.ch -- a 3-month fellowship for Bachelor's/Master's students to immerse themselves in cutting-edge research

We previously found GPT-2 to be a strong model of the human language system (https://doi.org/10.1073/pnas.2105646118). Greta Tuckute pushes further and tests how well model-selected sentences can modulate neural activity: https://www.biorxiv.org/content/10.1101/2023.04.16.537080v1. It turns out such sentences can nearly double neural responses, or completely suppress them, relative to baseline.
Two results in this work that I find especially interesting:
1. Model predictions of neural activity are as good for the extreme drive/suppress cases as for regular stimuli. This differs from vision, where current models overpredict the modulation of neural activity: https://doi.org/10.1126/science.aav9436
2. Under reasonable assumptions about inter-subject noise, prediction accuracy of neural activity is ~70% as good as it could possibly be. So even with these edge-case stimuli, gpt2-xl accounts for over 2/3 of the variance in the human language system.
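For intuition on the ~70% figure: raw prediction accuracy is typically divided by a noise ceiling estimated from inter-subject reliability. The sketch below shows one generic leave-one-subject-out estimator, not the paper's exact procedure; the function names and the correlation-based metric are assumptions:

```python
import numpy as np

def noise_ceiling(subject_responses):
    """Leave-one-subject-out estimate of attainable prediction accuracy.

    subject_responses: (n_subjects, n_stimuli) responses to identical stimuli.
    Each subject is correlated with the mean of the remaining subjects;
    the average correlation bounds how well any model could do.
    """
    n = subject_responses.shape[0]
    ceilings = []
    for i in range(n):
        held_out = subject_responses[i]
        others = np.delete(subject_responses, i, axis=0).mean(axis=0)
        ceilings.append(np.corrcoef(held_out, others)[0, 1])
    return float(np.mean(ceilings))

def normalized_score(model_predictions, subject_responses):
    """Raw model-vs-data correlation divided by the ceiling; a value of
    ~0.7 means predictions are ~70% as good as they could possibly be."""
    target = subject_responses.mean(axis=0)
    raw = np.corrcoef(model_predictions, target)[0, 1]
    return raw / noise_ceiling(subject_responses)
```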