Andrew Lampinen

766 Followers
264 Following
130 Posts
Interested in cognition and artificial intelligence. Research Scientist at DeepMind. Posts are mine.
Twitter: https://twitter.com/AndrewLampinen
Website: https://lampinen.github.io
Publications: https://scholar.google.com/citations?hl=en&user=_N44XxAAAAAJ&view_op=list_works&sortby=pubdate
Pleased to share that our paper (https://sigmoid.social/@lampinen/112491958002918498) is now accepted at TMLR! The camera-ready version should be clearer and improved, thanks to helpful comments from the reviewers (and others). Thanks again to my co-authors @stephaniechan and @khermann
The TMLR version is here: https://openreview.net/forum?id=aY2nsgE97a
and the arXiv version (https://arxiv.org/abs/2405.05847) should be updated to match shortly. Check it out if you're interested in interpretability and its challenges!
#interpretability
Andrew Lampinen (@[email protected])

How well can we understand an LLM by interpreting its representations? What can we learn by comparing brain and model representations? Our new paper (https://arxiv.org/abs/2405.05847) highlights intriguing biases in learned feature representations that make interpreting them more challenging! 1/9 #interpretability #deeplearning #representation #transformers

Really excited to share that I'm hiring for a Research Scientist position in our team! If you're interested in the kind of cognitively-oriented work we've been doing on learning & generalization, data properties, representations, LMs, or agents, please check it out!
https://boards.greenhouse.io/deepmind/jobs/6182852
#research #jobs
Research Scientist, Cognition (Mountain View, California, US)
Pleased to share my paper "Can language models handle recursively nested grammatical structures? A case study on comparing models and humans" was accepted to Computational Linguistics! Early journal version here: https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00525/123789/Can-language-models-handle-recursively-nested
Check it out if you're generally interested in assessing LMs against human capabilities, or in LM capabilities for processing center embedding specifically.
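For readers unfamiliar with the construction: center embedding nests relative clauses inside one another, as in "the dog [the cat [the rat bit] chased] barked", and such sentences get rapidly harder for humans as depth increases. Below is a minimal sketch, not the paper's actual evaluation pipeline, of how one might probe this with an off-the-shelf LM by comparing per-token surprisal across nesting depths (the model choice and example sentences are my own assumptions):

```python
# Minimal sketch: score center-embedded sentences of increasing depth with an
# off-the-shelf LM. The model and sentence materials are illustrative
# assumptions, not the stimuli or models used in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

sentences = {
    "depth 1": "The dog the cat chased barked.",
    "depth 2": "The dog the cat the rat bit chased barked.",
}

def mean_surprisal(text: str) -> float:
    """Average negative log-probability per token (nats); higher = harder."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()  # cross-entropy averaged over predicted tokens

for depth, sentence in sentences.items():
    print(f"{depth}: {mean_surprisal(sentence):.2f} nats/token")
```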
Andrew Lampinen (@[email protected])

Very excited to share a substantially updated version of our preprint “Language models show human-like content effects on reasoning tasks!” TL;DR: LMs and humans show strikingly similar patterns in how the content of a logic problem affects their answers. Thread: 1/10 #LanguageModels #lms #AI #cogsci #machinelearning #nlp #nlproc #cognitivescience


Pleased to share that our paper "Language models, like humans, show content effects on reasoning tasks" is now published in PNAS Nexus!
https://academic.oup.com/pnasnexus/article/3/7/pgae233/7712372

#LanguageModels #lms #AI #cogsci #machinelearning #nlp #nlproc #cognitivescience

@gmusser @thetransmitter @kristin_ozelli @KathaDobs @ev_fedorenko @lampinen @Neurograce @UlrikeHahn 👆I really recommend this. There is a fair amount of commentary on this platform about how non-human-like artificial neural networks and #LLMs are, but most of it is not from cognitive scientists or neuroscientists. This article is an informative counterbalance.
At a detailed level, artificial neural networks look very different from natural brains. At a higher level, they are uncannily similar. My story for @thetransmitter, edited by @kristin_ozelli, features @KathaDobs @ev_fedorenko @lampinen @Neurograce and others. https://www.thetransmitter.org/neural-networks/can-an-emerging-field-called-neural-systems-understanding-explain-the- #neuroscience #AI #LLMs
Can an emerging field called ‘neural systems understanding’ explain the brain?

This mashup of neuroscience, artificial intelligence and even linguistics and philosophy of mind aims to crack the deep question of what "understanding" is…

This paper is really just us *finally* following up on a weird finding about RSA (figure shown here) from a paper Katherine Hermann & I had at NeurIPS back in the dark ages (2020): https://x.com/khermann_/status/1323353860283326464
Thanks to my coauthors @scychan_brains & Katherine! 9/9
Katherine Hermann (@khermann_) on X

Representations were more similar across runs in models trained to classify easy than hard features. The easy feature dominated in multi-task models. Model representations were more similar to those in an untrained model than a model trained on another task. (6/6)


We also just find these results inherently interesting for what they suggest about the inductive biases of deep learning models + gradient descent. See the paper for lots of discussion of related work on (behavioral) simplicity biases and much more!

We’ve also released a Colab notebook that provides a very minimal demo of the basic easy-hard representation bias effect, if you want to explore it for yourself (a rough sketch of the idea follows below): https://gist.github.com/lampinen-dm/b6541019ef4cf2988669ab44aa82460b
8/9
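If you'd rather skim than open the notebook, here is a rough standalone sketch of the effect. The gist above is the actual demo; the architecture, feature definitions, and similarity metric here are my own simplified assumptions: train small MLPs on an "easy" (linearly separable) or "hard" (parity-like) feature over the same inputs, then compare penultimate-layer representations across runs with linear CKA.

```python
# Rough sketch of the easy-vs-hard representation-bias effect (illustrative
# assumptions throughout, not the released demo's exact setup).
import numpy as np
import torch
import torch.nn as nn

def make_data(n=2048, d=8, hard=False, seed=0):
    rng = np.random.default_rng(seed)  # fixed seed: same inputs for all runs
    x = rng.standard_normal((n, d)).astype(np.float32)
    if hard:
        # "hard" feature: parity (XOR) of the signs of three inputs
        y = ((x[:, 0] > 0) ^ (x[:, 1] > 0) ^ (x[:, 2] > 0)).astype(np.float32)
    else:
        # "easy" feature: sign of a single input (linearly separable)
        y = (x[:, 0] > 0).astype(np.float32)
    return torch.from_numpy(x), torch.from_numpy(y)

def train_and_embed(hard, seed):
    torch.manual_seed(seed)  # vary initialization across runs
    x, y = make_data(hard=hard)
    body = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
    head = nn.Linear(64, 1)
    opt = torch.optim.Adam(list(body.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy_with_logits(
            head(body(x)).squeeze(-1), y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return body(x).numpy()  # penultimate-layer representations

def linear_cka(a, b):
    """Linear CKA between two (n_examples, n_units) representation matrices."""
    a = a - a.mean(0)
    b = b - b.mean(0)
    num = np.linalg.norm(a.T @ b, "fro") ** 2
    return num / (np.linalg.norm(a.T @ a, "fro") * np.linalg.norm(b.T @ b, "fro"))

easy_reps = [train_and_embed(hard=False, seed=s) for s in (0, 1)]
hard_reps = [train_and_embed(hard=True, seed=s) for s in (0, 1)]
print("easy-easy CKA:", linear_cka(*easy_reps))  # typically higher across runs
print("hard-hard CKA:", linear_cka(*hard_reps))  # typically lower across runs
```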

Similarly, computational neuroscience compares models to brains using techniques like regression or RSA. But RSA between our models shows strong biases (the stripes); e.g. a model doing two complex tasks appears less similar to another model doing *exactly the same* tasks than it does to a model doing only simple tasks! 7/9
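For anyone who wants to see the mechanics: RSA compares two systems by building each one's representational dissimilarity matrix (RDM) over a shared stimulus set, then correlating the RDMs. A minimal generic sketch follows; this is not our paper's analysis code, and the distance/correlation choices are my own assumptions:

```python
# Minimal sketch of RSA (representational similarity analysis) between two
# systems' representations of the same stimuli.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(representations: np.ndarray) -> np.ndarray:
    """Condensed RDM: pairwise correlation distance between stimulus vectors."""
    return pdist(representations, metric="correlation")

def rsa(reps_a: np.ndarray, reps_b: np.ndarray) -> float:
    """Spearman correlation between the two systems' RDMs."""
    return spearmanr(rdm(reps_a), rdm(reps_b)).correlation

# Usage: reps_a and reps_b are (n_stimuli, n_units) arrays, e.g. layer
# activations from two models (or a model and brain recordings) on the
# same stimuli. Here we use synthetic data for illustration.
rng = np.random.default_rng(0)
reps_a = rng.standard_normal((50, 128))
reps_b = reps_a @ rng.standard_normal((128, 64))  # a linear transform of A
print(f"RSA score: {rsa(reps_a, reps_b):.2f}")
```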