Jascha Achterberg

@achterbrain
1.2K Followers
859 Following
540 Posts

#Computational #Neuroscience at #Cambridge University and #Intel.

I work on the connection between biological 🧠 and artificial 🤖 intelligence. By building neuro-inspired AI (from prefrontal cortex dynamics and circuit plasticity rules), I try to understand the general principles underlying computation in artificial and biological networks. I work with John Duncan and Matt Botvinick.

More on: https://www.jachterberg.com/

#AI #Neuroscience #NeuroAI #machinelearning #CognitiveAI


Join our ARIA-funded project as a postdoc on brain-inspired computing 🤖🧠, at Imperial College London! Super exciting opportunity connecting both fundamental research and the creation of cutting-edge technologies!

#neuroscience #AI #ML #compneuro #NeuroAI

https://www.imperial.ac.uk/jobs/search-jobs/description/index.php?jobId=20479&jobTitle=Research+Associate+in+Computational+Neuroscience%2FNeuroAI%2FNeuromorphic+Systems


🚨 Submissions for #CCN2024 are now open at http://ccneuro.org!

Join us in Boston for a super fun conference full of AI + Neuro + CogSci -- cross-disciplinary & single discipline submissions all welcome 🧠🤖

CCN really is a great conference & I am excited to help organise it as part of the ECR committee!

#neuroscience #compneuro #neuroai


🚨Call for Papers 🚨

The Re-Align Workshop is coming to #ICLR2024

Our Call for Papers is finally up! Come share your representational alignment work at our interdisciplinary workshop at ICLR in beautiful Vienna!
representational-alignment.github.io

#neuroscience #ML #AI #cognition #NeuroAI @neuroscience @cogsci #cogsci

1/4

Measuring Ca2+ signals and fMRI-BOLD simultaneously in mice:

"Multimodal measures of spontaneous brain activity reveal both common and divergent patterns of cortical functional organization", Vafaii et al. 2023 with @PessoaBrain
https://www.nature.com/articles/s41467-023-44363-z

Finally a lab grabbed the bull by the horns and had a go at figuring out the neural basis of the BOLD signal.

And fMRI-based studies really do rest on quite an assumption here, one long known to be false: "most work has assumed a disjoint functional network organization (i.e., brain regions belong to one and only one network)."

A significant extension of early fMRI work on epilepsy patients who had chronic electrode implants.

Looking forward to reading it slowly.

#neuroscience #fMRI

Multimodal measures of spontaneous brain activity reveal both common and divergent patterns of cortical functional organization - Nature Communications

The relationship between fMRI-BOLD and neural activity in the brain is not well understood. Here, the authors combine calcium imaging and fMRI in the mouse brain to compare network organization derived from these imaging modalities.


If you're wondering what representational alignment actually is, or just want a primer on recent work to prep for the workshop, check out our recent preprint!

https://arxiv.org/abs/2310.13018

4/4

Getting aligned on representational alignment

Biological and artificial information processing systems form representations of the world that they can use to categorize, reason, plan, navigate, and make decisions. How can we measure the similarity between the representations formed by these diverse systems? Do similarities in representations then translate into similar behavior? If so, then how can a system's representations be modified to better match those of another system? These questions pertaining to the study of representational alignment are at the heart of some of the most promising research areas in contemporary cognitive science, neuroscience, and machine learning. In this Perspective, we survey the exciting recent developments in representational alignment research in the fields of cognitive science, neuroscience, and machine learning. Despite their overlapping interests, there is limited knowledge transfer between these fields, so work in one field ends up duplicated in another, and useful innovations are not shared effectively. To improve communication, we propose a unifying framework that can serve as a common language for research on representational alignment, and map several streams of existing work across fields within our framework. We also lay out open problems in representational alignment where progress can benefit all three of these fields. We hope that this paper will catalyze cross-disciplinary collaboration and accelerate progress for all communities studying and developing information processing systems.
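To make the "how can we measure similarity between representations?" question concrete, here is a minimal sketch of one widely used family of measures, Representational Similarity Analysis (RSA). This is illustrative only, not code from the preprint: the function names `rdm` and `rsa_score`, the correlation-distance choice, and the toy data are all my assumptions, and RSA is just one of several alignment measures surveyed in this literature.

```python
# Hedged sketch: RSA compares two systems' representational *geometries*
# rather than their raw feature vectors, so the systems may have
# different dimensionalities.
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the feature vectors of every pair of stimuli.
    features: array of shape (n_stimuli, n_features)."""
    return 1.0 - np.corrcoef(features)

def rsa_score(feats_a, feats_b):
    """Alignment score: Spearman-style rank correlation between the
    upper triangles of the two systems' RDMs. 1.0 means identical
    representational geometry."""
    iu = np.triu_indices(feats_a.shape[0], k=1)
    a, b = rdm(feats_a)[iu], rdm(feats_b)[iu]
    # Double argsort yields ranks (no ties for continuous data),
    # making the score robust to monotone transforms of distance.
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Toy example: the same 10 stimuli represented by two systems with
# different feature dimensionalities (50-d vs 30-d).
rng = np.random.default_rng(0)
stimuli = rng.normal(size=(10, 50))            # "system A" features
aligned = stimuli @ rng.normal(size=(50, 30))  # linear readout of A
print(rsa_score(stimuli, aligned))  # high: geometry largely preserved
```

Because the score depends only on pairwise dissimilarity structure, it can compare, say, a network layer against neural recordings without any shared coordinate system, which is exactly why measures of this kind recur across the cognitive science, neuroscience, and ML work the abstract describes.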


Shout out to my fellow organizers:
Lukas Muttenthaler, Erin Grant, Ilia Sucholutsky & Katherine Hermann! The 5 of us can't wait to see you all in Vienna!

3/4

We're accepting short (up to 4 pages) and long (up to 9 pages) papers from cognitive science, neuroscience 🧠, ML 🤖, and other areas. Deadline is end of day on Feb 2nd, AoE.

If you want to receive updates or to become a reviewer, you can RSVP here: https://docs.google.com/forms/d/e/1FAIpQLScwBKbHKRjPDjV3-cieuyKST8eQT8_UVjAzYVdYyt2mJ0INjA/viewform

2/4

Re-Align: Workshop on Representational Alignment

The question of "What makes a good representation?" in machine learning can be addressed in one of several ways: by evaluating downstream behavior, by inspecting internal representations, or by characterizing a system's inductive biases. Each of these methodologies involves measuring the alignment of an artificial intelligence system to a ground-truth system (usually a human or a population of humans) at some level of analysis (be it behavior, internal representation, or something in between). However, despite this shared goal, the machine learning, neuroscience, and cognitive science communities that study alignment among artificial and biological intelligence systems currently lack a shared framework for conveying insights across methodologies and disciplines. This workshop aims to bridge this gap by defining, evaluating, and understanding the implications of representational alignment among biological & artificial systems.

We invite researchers across the machine learning, neuroscience, and cognitive science communities to contribute to this discussion in the form of invited talks, contributed papers, and structured discussions that address questions such as:

How can we measure representational alignment among biological and artificial intelligence (AI) systems?
Can representational alignment tell us if AI systems use the same strategies to solve tasks as humans do?
What are the consequences (positive, neutral, and negative) of representational alignment?
How does representational alignment connect to behavioral alignment and value alignment, as understood in AI safety and interpretability & explainability?
How can we increase (or decrease) representational alignment of an AI system?
How does the degree of representational alignment between two systems impact their ability to compete, cooperate, and communicate?

While the focus of the workshop will generally be on the representational alignment of models with humans, we also welcome submissions regarding representational alignment in other settings (e.g. alignment of models with other models). To facilitate discussion during the workshop, the organizers prepared a reference paper highlighting key issues and publications within the topic of representational alignment. A concrete goal of the workshop is to expand this paper with any new insights generated during the workshop. The paper is available on arXiv.

If you would be interested in participating, please let us know below! For more details: https://representational-alignment.github.io/



This paper from @dyamins and Kalanit Grill-Spector looks pretty cool. Still need to read it properly, but it looks like a demonstration of how topography, combined with a principle about self-supervised learning that @ShahabBakht discovered a couple of years ago, could explain a lot about visual cortex organisation:

https://www.biorxiv.org/content/10.1101/2023.12.19.572460v1

Optimal information loading into working memory explains dynamic coding in the prefrontal cortex
https://doi.org/10.1073/pnas.2307991120
#neuroscience