🚨 Call for Papers 🚨

The Re-Align Workshop is coming to #ICLR2024

Our Call for Papers is finally up! Come share your representational alignment work at our interdisciplinary workshop at ICLR in beautiful Vienna!
representational-alignment.github.io

#neuroscience #ML #AI #cognition #NeuroAI @neuroscience @cogsci #cogsci

1/4

We're accepting short (up to 4 pages) and long (up to 9 pages) papers from cognitive science, neuroscience 🧠, ML 🤖, and other areas. The deadline is Feb 2nd, end of day Anywhere on Earth (AoE).

If you want to receive updates or to become a reviewer, you can RSVP here: https://docs.google.com/forms/d/e/1FAIpQLScwBKbHKRjPDjV3-cieuyKST8eQT8_UVjAzYVdYyt2mJ0INjA/viewform

2/4

Re-Align: Workshop on Representational Alignment

The question of "What makes a good representation?" in machine learning can be addressed in one of several ways: by evaluating downstream behavior, by inspecting internal representations, or by characterizing a system's inductive biases. Each of these methodologies involves measuring the alignment of an artificial intelligence system to a ground-truth system (usually a human or a population of humans) at some level of analysis (be it behavior, internal representation, or something in between). However, despite this shared goal, the machine learning, neuroscience, and cognitive science communities that study alignment among artificial and biological intelligence systems currently lack a shared framework for conveying insights across methodologies and disciplines.

This workshop aims to bridge this gap by defining, evaluating, and understanding the implications of representational alignment among biological and artificial systems. We invite researchers across the machine learning, neuroscience, and cognitive science communities to contribute to this discussion in the form of invited talks, contributed papers, and structured discussions that address questions such as:

- How can we measure representational alignment among biological and artificial intelligence (AI) systems?
- Can representational alignment tell us whether AI systems use the same strategies to solve tasks as humans do?
- What are the consequences (positive, neutral, and negative) of representational alignment?
- How does representational alignment connect to behavioral alignment and to value alignment, as understood in AI safety and interpretability & explainability?
- How can we increase (or decrease) the representational alignment of an AI system?
- How does the degree of representational alignment between two systems impact their ability to compete, cooperate, and communicate?

While the focus of the workshop will generally be on the representational alignment of models with humans, we also welcome submissions regarding representational alignment in other settings (e.g., alignment of models with other models).

To facilitate discussion during the workshop, the organizers have prepared a reference paper highlighting key issues and publications on the topic of representational alignment. A concrete goal of the workshop is to expand this paper with any new insights generated during the workshop. The paper is available on arXiv.

If you would be interested in participating, please let us know below! For more details: https://representational-alignment.github.io/
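As a concrete illustration of what "measuring representational alignment" can look like in practice, here is a minimal sketch of one widely used metric, linear centered kernel alignment (CKA; Kornblith et al., 2019), assuming both systems were probed with the same set of stimuli. This is just one of many possible measures and not an official workshop baseline; the array shapes and random data below are purely illustrative.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two response matrices.

    X: (n_stimuli, d1) responses of system A to n stimuli.
    Y: (n_stimuli, d2) responses of system B to the same stimuli.
    Returns a similarity score in [0, 1].
    """
    # Center each feature dimension across stimuli.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    self_x = np.linalg.norm(X.T @ X, "fro") ** 2
    self_y = np.linalg.norm(Y.T @ Y, "fro") ** 2
    return cross / np.sqrt(self_x * self_y)

# Example: compare two hypothetical "systems" on 100 shared stimuli.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(100, 512))  # e.g., a model layer
acts_b = rng.normal(size=(100, 64))   # e.g., a neural recording
print(f"CKA: {linear_cka(acts_a, acts_b):.3f}")
```

Linear CKA is invariant to orthogonal transformations and isotropic scaling of the representations, which is what makes it convenient for comparing systems whose feature spaces have different dimensionality.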

Shout out to my fellow organizers:
Lukas Muttenthaler, Erin Grant, Ilia Sucholutsky & Katherine Hermann! The 5 of us can't wait to see you all in Vienna!

3/4

If you're wondering what representational alignment actually is, or just want a primer on recent work to prep for the workshop, check out our recent preprint!

https://arxiv.org/abs/2310.13018

4/4

Getting aligned on representational alignment

Biological and artificial information processing systems form representations of the world that they can use to categorize, reason, plan, navigate, and make decisions. How can we measure the similarity between the representations formed by these diverse systems? Do similarities in representations then translate into similar behavior? If so, then how can a system's representations be modified to better match those of another system? These questions pertaining to the study of representational alignment are at the heart of some of the most promising research areas in contemporary cognitive science, neuroscience, and machine learning. In this Perspective, we survey the exciting recent developments in representational alignment research in the fields of cognitive science, neuroscience, and machine learning. Despite their overlapping interests, there is limited knowledge transfer between these fields, so work in one field ends up duplicated in another, and useful innovations are not shared effectively. To improve communication, we propose a unifying framework that can serve as a common language for research on representational alignment, and map several streams of existing work across fields within our framework. We also lay out open problems in representational alignment where progress can benefit all three of these fields. We hope that this paper will catalyze cross-disciplinary collaboration and accelerate progress for all communities studying and developing information processing systems.
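To make the abstract's first question concrete, here is a minimal sketch of representational similarity analysis (RSA; Kriegeskorte et al., 2008), a classic approach from the neuroscience side: each system is summarized by a representational dissimilarity matrix (RDM) over the same stimuli, and the two RDMs are then rank-correlated. The synthetic data and function name below are ours, for illustration only; see the preprint for the full landscape of measures.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_similarity(X, Y, metric="correlation"):
    """RSA score between two systems' responses to the same stimuli.

    X: (n_stimuli, d1), Y: (n_stimuli, d2).
    Builds one RDM per system, then Spearman-correlates them.
    """
    rdm_x = pdist(X, metric=metric)  # condensed upper-triangle RDM
    rdm_y = pdist(Y, metric=metric)
    rho, _ = spearmanr(rdm_x, rdm_y)  # rank-based, scale-invariant
    return rho

rng = np.random.default_rng(1)
brain = rng.normal(size=(50, 200))          # hypothetical voxel responses
model = brain @ rng.normal(size=(200, 32))  # a linear readout of them
print(f"RSA (Spearman rho): {rsa_similarity(brain, model):.3f}")
```

Because the comparison happens in RDM space, RSA never requires a direct mapping between the two systems' feature dimensions, which is why it applies across brains and models alike.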
