Lukas Muttenthaler

197 Followers
174 Following
81 Posts

PhD Student in ML and #NeuroAI at TU Berlin. Student Researcher at Google DeepMind. Guest researcher at MPI for Human Cognitive and Brain Sciences. Previously MSc in NLP at the University of Copenhagen.

Interested in all things related to (human and neural net) representation learning, PyTorch, and #JAX.
I deeply care about scientific rigor, honesty, transparency, and open-source.

🌍: https://lukasmut.github.io/
💻: https://github.com/LukasMut


If you're wondering what representational alignment actually is, or just want a primer on recent work to prep for the workshop, check out our recent preprint!

https://arxiv.org/abs/2310.13018

4/4

Getting aligned on representational alignment

Biological and artificial information processing systems form representations of the world that they can use to categorize, reason, plan, navigate, and make decisions. How can we measure the similarity between the representations formed by these diverse systems? Do similarities in representations then translate into similar behavior? If so, then how can a system's representations be modified to better match those of another system? These questions pertaining to the study of representational alignment are at the heart of some of the most promising research areas in contemporary cognitive science, neuroscience, and machine learning. In this Perspective, we survey the exciting recent developments in representational alignment research in the fields of cognitive science, neuroscience, and machine learning. Despite their overlapping interests, there is limited knowledge transfer between these fields, so work in one field ends up duplicated in another, and useful innovations are not shared effectively. To improve communication, we propose a unifying framework that can serve as a common language for research on representational alignment, and map several streams of existing work across fields within our framework. We also lay out open problems in representational alignment where progress can benefit all three of these fields. We hope that this paper will catalyze cross-disciplinary collaboration and accelerate progress for all communities studying and developing information processing systems.
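One concrete answer to the abstract's question of measuring similarity between representations of diverse systems is linear centered kernel alignment (CKA), a standard metric that compares two representation matrices even when their feature dimensionalities differ. This is a minimal illustrative sketch (the random matrices are synthetic stand-ins, not data from the paper):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_stimuli, n_features). Invariant to orthogonal
    transforms and isotropic scaling of either representation."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))      # e.g. activations from one system
Y = X @ rng.standard_normal((64, 32))   # a linear readout, different width
print(round(linear_cka(X, X), 3))       # → 1.0 (identical representations)
```

Because CKA only needs the two stimulus-by-feature matrices, it applies equally to neural network activations and to biological recordings, which is what makes it useful for cross-system comparisons like those surveyed here.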


🚨Call for Papers 🚨

The Re-Align Workshop is coming to #ICLR2024

Our Call for Papers is finally up! Come share your representational alignment work at our interdisciplinary workshop at ICLR in beautiful Vienna!
representational-alignment.github.io

#neuroscience #ML #AI #cognition #NeuroAI @neuroscience @cogsci #cogsci

1/4

At the poster for
@lukasmut's paper "Improving neural network representations using human similarity judgments"
https://sigmoid.social/@lukasmut/110524246202395409
Wed 13 Dec 10:45 a.m. CST — 12:45 p.m. CST
Great Hall & Hall B1+B2 (level 1) #300
https://nips.cc/virtual/2023/poster/71848
And while you're in the Wednesday morning session (I can't make it due to a scheduling clash), check out
@cogscikid
's great paper on "Combining behaviors with the successor feature keyboard"
https://twitter.com/cogscikid/status/1731683266942423318?t=b8CTL79Z6b01B8dyrypn_Q&s=19
#1913
https://nips.cc/virtual/2023/poster/72199
2/4
Lukas Muttenthaler (@[email protected])

🚨Beep beep 🚨 Have you ever wondered how to use human similarity judgments to improve neural network representations? 🧠 We have something for you! We found a linear transform that improves both representational alignment and downstream task performance! 🦾 https://arxiv.org/abs/2306.04507

Sigmoid Social

Very excited to head to NeurIPS! Feel free to reach out if you want to chat about any of our recent work on LMs, agents, interpretability, representational alignment, etc. You can find me:

At the poster for our work on "Passive learning of active causal strategies in agents and language models"
https://sigmoid.social/@lampinen/110434383859776741
Tue 12 Dec 5:15 p.m. CST — 7:15 p.m. CST
Great Hall & Hall B1+B2 (level 1) #825
https://nips.cc/virtual/2023/poster/72481
1/3
#Neurips #neurips2023

Andrew Lampinen (@[email protected])

What can be learned about causality and experimentation from passive data? What could language models learn from simply passively imitating text? We explore these questions in our new paper: “Passive learning of active causal strategies in agents and language models” https://arxiv.org/abs/2305.16183 Thread: 1/7 (x-post from https://twitter.com/AndrewLampinen/status/1662006807693336582?s=20) #llm #RL #causality #experiment #deeplearning #languagemodels #nlp #nlproc


What is representational alignment? How can we use it to study or improve intelligent systems? What challenges might we face? In a new paper (arxiv.org/abs/2310.13018), we describe a framework that attempts to unify ideas from cognitive science, neuroscience and AI to address these questions.

This paper is very much a work in progress, so feedback welcome! And thanks to all the contributors, and particularly the first authors Ilia & @lukasmut for their hard work!

Got pretty lucky this year with 3/3 papers I was involved in accepted to #neurips #neurips2023!
Our work on passive learning of causal strategies: https://sigmoid.social/@lampinen/110434383859776741

@lukasmut's awesome project on using human similarity judgements to improve neural network representations:
https://sigmoid.social/@lukasmut/110524246202395409

And a secret(?) third one yet to be revealed!

This has been a stellar team effort w/ Lorenz Linhardt, Jonas Dippel, Robert A. Vandermeulen, @khermann, @lampinen, @simonster 🧠🦾🎉
Side note: all of this only applies to CLIP models! ImageNet models fail to yield a best-of-both-worlds representation, probably because the information encoded in their representations is not rich enough.
Together, we hope to provide a way forward in understanding the differences between human and neural network representation spaces and the interaction between local and global similarity structures of neural net representations more broadly.
Across a wide variety of few-shot learning and anomaly detection tasks, our transform considerably improves performance over the original representations. At the same time, it improves representational alignment across different human similarity judgment datasets, on par with a naive alignment approach!
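To make the general idea concrete (a linear transform applied to frozen network representations that pulls model similarities toward human ones), here is a toy sketch. It is NOT the paper's actual method, which is trained on odd-one-out triplet judgments with regularization; the synthetic `S_human` below is a hypothetical stand-in for a human similarity matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 16
Z = rng.standard_normal((n, d))                 # frozen net representations
# stand-in for a human similarity matrix (PSD, unit diagonal)
S_human = np.corrcoef(rng.standard_normal((n, 8)))

# Factor the human similarity matrix into a target embedding T
# with T @ T.T ~= S_human, keeping the top-d eigencomponents.
vals, vecs = np.linalg.eigh(S_human)
idx = np.argsort(vals)[::-1][:d]
T = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))

# Least-squares linear transform so that Z @ W ~= T.
W, *_ = np.linalg.lstsq(Z, T, rcond=None)

# The transformed similarities sit closer to the human ones.
err_before = np.linalg.norm(Z @ Z.T - S_human)
err_after = np.linalg.norm((Z @ W) @ (Z @ W).T - S_human)
print(err_after < err_before)
```

The key property this toy version shares with the result described above is that the transform is purely linear and leaves the underlying network untouched, so any alignment gain comes from reweighting directions already present in the frozen representation.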