Andrew Saxe

774 Followers
179 Following
15 Posts
Professor at Gatsby and SWC, UCL, trying to figure out how we learn.
Web: www.saxelab.org

We are delighted to announce that Tim Behrens (@behrenstimb) has joined SWC as a Group Leader.

The Behrens Lab will strive to understand the neural mechanisms that support flexible goal-directed behaviour. In doing so, the team hopes to build new bridges between human and animal neuroscience and between biological and artificial intelligence, and to develop new methods for integrating across scales of neural activity.

Find out more: https://www.sainsburywellcome.org/web/research-news/tim-behrens-joins-swc-group-leader

Tim Behrens joins SWC as Group Leader | Sainsbury Wellcome Centre

📢​ Applications for ‘Analytical Connectionism’ (28 Aug-8 Sep 2023) are open!

The course will bring together neuroscience, psychology and ML communities, and will introduce analytical methods for neural-network analysis and connectionist theories of higher-level cognition & psychology.

The course will end with a 1.5-day workshop (limited spaces available for researchers who only want to attend the workshop), during which you will hear from experts about the current state of the art and the limits of our current understanding.

Organised by Stefano Sarao Mannelli, @SaxeLab and Peter Latham

⏰​ Deadline: 15 May (summer course) & 30 June (workshop)

ℹ️​ More info & how to apply: https://www.ucl.ac.uk/gatsby/analytical-connectionism-2023

Analytical Connectionism 2023

Analytical Connectionism - a 2-week summer course on analytical tools for probing neural networks and higher-level cognition

Gatsby Computational Neuroscience Unit
Hello everyone! I'm a postdoctoral research fellow in computational neuroscience based at Cambridge. My research focuses on the prefrontal cortex and working memory. Excited to follow comp neuro discussions on here. #introductions #computationalneuroscience #neuroscience
🚨 Our story on an AI-inspired model of cerebro-cerebellar networks is now out in @NatureComms with a few (useful) updates after peer review:
https://doi.org/10.1038/s41467-022-35658-8
---
RT @somnirons
New preprint by @boven_ellen @JoePemberton9 with Paul Chadderton and Richard Apps @BristolNeuroscience! Inspired by DL algorithms @maxjaderberg @DeepMind we propose that the cerebellum provides the cerebrum with task-specific feedback pred…
https://twitter.com/somnirons/status/1493881849055227906
Two new #postdoc positions in the labs of Laurenz Wiskott (Bochum, Germany) and @fzenke (Basel, Switzerland) have been announced. Find this and further open positions in our #BernsteinNetwork job pool: https://bit.ly/BN_Jobs
Job Pool – Bernstein Network Computational Neuroscience

Bernstein Network Computational Neuroscience
Hi, I am a theoretical physicist working in #statisticalphysics of disordered systems, #machinelearning, #informationtheory, etc. Interested in #science, #littérature (mostly in French, but also in English and Italian), and #classicalmusic. I have worked on quite a few different topics; my two books, "Spin Glass Theory and Beyond" (with G. Parisi and M. A. Virasoro) and "Information, Physics and Computation" (with A. Montanari), give a partial idea of my centers of interest.

@johndmurray @takuito

This is a beautiful paper!

And a taste of what large open datasets can contribute.

Exciting (independent) Postdoctoral Fellow opportunities at the intersection of #Neuroscience and #AI (#NeuroAI) at the Kempner Institute at Harvard. Please boost to increase our reach and diversity.

https://www.harvard.edu/kempner-institute/the-kempner-institute-for-the-study-of-artificial-and-natural-intelligence/opportunities/the-kempner-institute-postdoctoral-fellowship/

The Kempner Institute Postdoctoral Fellowship

Kempner Institute

Going forward, #DeepLabCut updates will come via medium (and GitHub of course), but not the bird. This is sad, as the science Twitter community, I believe, was so important in getting out the word early on our very first code release and paper. But, it’s not okay anymore 💔…

📣🚨To get the latest updates on our code releases going forward, please see our medium blog http://www.deeplabcut.medium.com and all resources linked on our homepage: http://DeepLabCut.org - take care #deeplabcutters 🙏🏼💜

Latent variable model with parametric tuning curves that assume a common shape shared by neurons in an ensemble, where the ensembles are defined parametrically. Uses ML tricks (e.g. a VAE-style encoder) to make it fast, and seems to work on real data (head direction and grid cells).

Here is the article: https://arxiv.org/abs/2210.03155
Authors: Martin Bjerke, Lukas Schott, Kristopher T. Jensen, Claudia Battistin, David Klindt (@dak), and Benjamin Dunn (@benjamin_dunn)

Note: here it was assumed that cell types are a thing and that brain areas aren’t a continuous mush of selectivity.

Understanding Neural Coding on Latent Manifolds by Sharing Features and Dividing Ensembles

Systems neuroscience relies on two complementary views of neural data, characterized by single neuron tuning curves and analysis of population activity. These two perspectives combine elegantly in neural latent variable models that constrain the relationship between latent variables and neural activity, modeled by simple tuning curve functions. This has recently been demonstrated using Gaussian processes, with applications to realistic and topologically relevant latent manifolds. Those and previous models, however, missed crucial shared coding properties of neural populations. We propose feature sharing across neural tuning curves, which significantly improves performance and leads to better-behaved optimization. We also propose a solution to the problem of ensemble detection, whereby different groups of neurons, i.e., ensembles, can be modulated by different latent manifolds. This is achieved through a soft clustering of neurons during training, thus allowing for the separation of mixed neural populations in an unsupervised manner. These innovations lead to more interpretable models of neural population activity that train well and perform better even on mixtures of complex latent manifolds. Finally, we apply our method on a recently published grid cell dataset, recovering distinct ensembles, inferring toroidal latents and predicting neural tuning curves all in a single integrated modeling framework.

arXiv.org
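To make the two ingredients described above concrete — tuning curves built from a shared feature bank, and soft clustering of neurons into ensembles with their own latent manifolds — here is a minimal generative sketch in NumPy. All names, dimensions, and the cosine/sine feature basis are illustrative assumptions, not the paper's actual parameterisation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only)
N, T, K, F = 12, 200, 2, 8  # neurons, time bins, ensembles, shared features

# Each ensemble is driven by its own 1-D circular latent trajectory
latents = rng.uniform(0, 2 * np.pi, size=(K, T))

def features(theta):
    """Shared feature bank: F harmonic basis functions of the latent."""
    harmonics = np.arange(1, F // 2 + 1)
    return np.concatenate([np.cos(np.outer(harmonics, theta)),
                           np.sin(np.outer(harmonics, theta))])  # (F, T)

Phi = np.stack([features(latents[k]) for k in range(K)])  # (K, F, T)

# Per-neuron readout weights over the SHARED features: neurons in an
# ensemble inherit a common tuning-curve shape family, differing only
# in how they weight the basis functions
W = rng.normal(size=(N, F))

# Soft ensemble assignments: a softmax over per-neuron logits, which
# during training lets gradient descent separate mixed populations
logits = rng.normal(size=(N, K))
pi = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # (N, K)

# Firing rates: each neuron mixes, via its soft assignment, the
# shared-feature tuning curves evaluated on each ensemble's latent
rates = np.einsum('nk,nf,kft->nt', pi, W, Phi)  # (N, T)
```

In a fitted model the latents would come from a VAE-style encoder and the logits, weights, and latents would all be learned jointly; here everything is random just to show the shapes and the mixture structure.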