Lukas Muttenthaler

197 Followers
174 Following
81 Posts

PhD Student in ML and #NeuroAI at TU Berlin. Student Researcher at Google DeepMind. Guest researcher at MPI for Human Cognitive and Brain Sciences. Previously MSc in NLP at the University of Copenhagen.

Interested in all things concerned w/ (human and neural net) representation learning, PyTorch and #JAX. 
I deeply care about scientific rigor, honesty, transparency, and open-source.

🌍: https://lukasmut.github.io/
💻: https://github.com/LukasMut

Across a wide variety of few-shot learning and anomaly detection tasks, our transform considerably improves performance over the original representations. At the same time, the transform improves representational alignment across different human similarity judgment datasets, on par with the naive alignment approach!
While the original representations are locally accurate (that’s what the pretraining objectives shoot for), they are poorly organized globally. Our transform restructures the representation space in a globally more meaningful and human-aligned way while preserving local structure.
Naively aligning neural network representations with human similarity judgments improves representational alignment but hurts downstream task performance quite a bit. Maximizing representational alignment while preserving a model's local similarity structure yields a best-of-both-worlds representation! 🧠🤖
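The trade-off described in these posts can be sketched as a toy optimization: learn a linear transform W of the model's representations whose pairwise similarity matrix is pulled toward human judgments (global alignment) while a penalty keeps it close to the model's original similarities (a stand-in for preserving local structure). Everything here — the dot-product similarity, the squared-error losses, the synthetic data — is an illustrative assumption, not the actual objective from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): model representations X and a synthetic,
# symmetric "human" similarity matrix S_human.
n, d = 20, 8
X = rng.normal(size=(n, d))
S_human = rng.normal(size=(n, n))
S_human = (S_human + S_human.T) / 2

def sims(Z):
    # Dot-product similarity matrix of the (transformed) representations.
    return Z @ Z.T

S_orig = sims(X)  # the model's original similarity structure

def loss_and_grad(W, lam=1.0):
    S = sims(X @ W)
    d_align = S - S_human  # pull toward human similarity judgments (global)
    d_local = S - S_orig   # stay close to the original similarities (local)
    loss = np.sum(d_align**2) + lam * np.sum(d_local**2)
    # Gradient of sum((X W W^T X^T - T)^2) w.r.t. W is 2 X^T (D + D^T) X W.
    G = 2 * (d_align + d_align.T) + 2 * lam * (d_local + d_local.T)
    return loss, X.T @ G @ (X @ W)

# Plain gradient descent from the identity transform.
W = np.eye(d)
lr = 1e-6
losses = [loss_and_grad(W)[0]]
for _ in range(300):
    _, grad = loss_and_grad(W)
    W -= lr * grad
    losses.append(loss_and_grad(W)[0])
# With a small enough step size, the combined objective decreases,
# trading off human alignment against preserving the original structure.
```

Setting lam to 0 recovers the naive approach (pure alignment, free to scramble the model's similarity structure); a larger lam weights preservation of the original representations more heavily.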