IRCN and Chen Institute Joint Course on Neuro-inspired Computation🧠
https://ircn.jp/en/events/07_ircncourse2025
Theoretical neuroscientist, interested in neural dynamics, neural coding, and data mining.
Assistant Professor at the BioRobotics Institute, Sant’Anna School of Advanced Studies.
PI of the Brain Dynamics Laboratory.
@russoel.bsky.social
Lab's website: https://www.santannapisa.it/it/istituto/biorobotica/brain-dynamics-laboratory
📢 New work on the Integration of rate and phase codes by hippocampal cell assemblies to support the encoding of spatiotemporal context
https://www.nature.com/articles/s41467-024-52988-x
It was a pleasure to work with Nadine Becker, Aleks P. F. Domanski, Timothy Howe, Kipp Freud, @DurstewitzLab, and Matt Jones.
@SantAnnaPisa @BristolNeurosci @zi_mannheim @BernsteinNetwork
#Neuroscience #Hippocampus #ThetaSequences #PlaceCells
Russo et al. show that context-specific place cell assemblies support hippocampal integration of past experiences into future plans during goal-directed behavior, and propose a biophysical mechanism behind the formation of goal-dependent theta sequences.
New work on the neural encoding of social recognition and the role played by oxytocin in it. It was a great pleasure to collaborate with David Wolf, Jonathan Reinwald, Renée Hartig, Wolfgang Kelsch and colleagues!
Recognition memory for other individuals forms quickly. Here the authors show that such memories are enabled by oxytocin and can be retrieved from reinforced and more distinct neural representations even when only limited sensory information is available.
Interested in investigating how neuronal dynamics supports reinforcement learning?
Join us! 📢 PhD position in the Brain Dynamics Laboratory at @SantAnnaPisa, Italy.
#RNN #ReinforcementLearning #computationalneuroscience
Deadline: 13:00 on 29 July 2024.
https://www.unicam.it/bandi/2024/bando-n0046958-del-28062024
Project description under ‘Code 4.2’:
https://www.unicam.it/sites/default/files/bandi/2024/06/ANNEX%201_TAN_ciclo%20XL_0.pdf
The PhD position will be at the Brain Dynamics Laboratory @SantAnnaPisa:
https://santannapisa.it/it/istituto/biorobotica/brain-dynamics-laboratory
in collaboration with the Kelsch lab, @uni_mainz, Germany, with the opportunity to spend a period abroad
https://www.kelschlab.com/
Notice of selection procedure for admission to the national PhD programme in Theoretical and Applied Neuroscience (Cycle 40, a.y. 2024/2025), administrative headquarters: School of Advanced Studies - International Doctoral School, Università di Camerino.
Non-contrastive SSL methods like BYOL and SimSiam rely on asymmetric predictor networks to avoid representational collapse without negative samples. Yet, how predictor networks facilitate stable learning is not fully understood. While previous theoretical analyses assumed Euclidean losses, most practical implementations rely on cosine similarity. To gain further theoretical insight into non-contrastive SSL, we analytically study learning dynamics in conjunction with Euclidean and cosine similarity in the eigenspace of closed-form linear predictor networks. We show that both avoid collapse through implicit variance regularization albeit through different dynamical mechanisms. Moreover, we find that the eigenvalues act as effective learning rate multipliers and propose a family of isotropic loss functions (IsoLoss) that equalize convergence rates across eigenmodes. Empirically, IsoLoss speeds up the initial learning dynamics and increases robustness, thereby allowing us to dispense with the EMA target network typically used with non-contrastive methods. Our analysis sheds light on the variance regularization mechanisms of non-contrastive SSL and lays the theoretical grounds for crafting novel loss functions that shape the learning dynamics of the predictor's spectrum.
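For readers unfamiliar with the setup the abstract refers to, here is a minimal, illustrative sketch (not the paper's code) of a SimSiam-style non-contrastive objective: two augmented views, an asymmetric predictor network, a stop-gradient on the target branch, and a cosine-similarity loss. The module names and sizes are placeholders; real implementations use deep backbones and the paper's IsoLoss additionally reweights the eigenmodes, which is not shown here.

```python
# Minimal sketch of a non-contrastive (SimSiam-style) cosine-similarity loss.
# Illustrative only; architecture and dimensions are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonContrastiveModel(nn.Module):
    def __init__(self, dim=128, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.predictor = nn.Linear(dim, dim)  # linear predictor, as in the closed-form analysis

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)      # embeddings of two augmented views
        p1, p2 = self.predictor(z1), self.predictor(z2)  # asymmetric predictor branch
        # Negative cosine similarity, with stop-gradient on the target branch.
        loss = -(F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
                 + F.cosine_similarity(p2, z1.detach(), dim=-1).mean()) / 2
        return loss

# Toy usage with random stand-ins for two augmented views of a batch.
model = NonContrastiveModel()
x1, x2 = torch.randn(32, 128), torch.randn(32, 128)
loss = model(x1, x2)
loss.backward()
```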
How to analyze computations & dynamical mechanisms of RNNs?
Our #NeurIPS2023 spotlight presents a highly efficient algorithm for locating all fixed points, cycles, and bifurcation manifolds in RNNs: https://arxiv.org/abs/2310.17561
We further mathematically prove that bifurcations cause exploding or vanishing gradients in RNN training, often leading to abrupt loss jumps, and that the recently introduced technique of generalized teacher forcing (https://proceedings.mlr.press/v202/hess23a.html) largely ameliorates this problem.
Recurrent neural networks (RNNs) are popular machine learning tools for modeling and forecasting sequential data and for inferring dynamical systems (DS) from observed time series. Concepts from DS theory (DST) have variously been used to further our understanding of both how trained RNNs solve complex tasks and the training process itself. Bifurcations are particularly important phenomena in DS, including RNNs, that refer to topological (qualitative) changes in a system's dynamical behavior as one or more of its parameters are varied. Knowing the bifurcation structure of an RNN will thus allow one to deduce many of its computational and dynamical properties, like its sensitivity to parameter variations or its behavior during training. In particular, bifurcations may account for sudden loss jumps observed in RNN training that could severely impede the training process. Here we first mathematically prove for a particular class of ReLU-based RNNs that certain bifurcations are indeed associated with loss gradients tending toward infinity or zero. We then introduce a novel heuristic algorithm for detecting all fixed points and k-cycles in ReLU-based RNNs and their existence and stability regions, hence bifurcation manifolds in parameter space. In contrast to previous numerical algorithms for finding fixed points and common continuation methods, our algorithm provides exact results and returns fixed points and cycles up to high orders with surprisingly good scaling behavior. We exemplify the algorithm on the analysis of the training process of RNNs, and find that the recently introduced technique of generalized teacher forcing completely avoids certain types of bifurcations in training. Thus, besides facilitating the DST analysis of trained RNNs, our algorithm provides a powerful instrument for analyzing the training process itself.
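The key observation behind this kind of analysis is that a ReLU-based RNN is piecewise linear: within one fixed ReLU on/off pattern the map is affine, so each candidate fixed point is the solution of a linear system, which then has to be checked for consistency with that pattern. The sketch below illustrates this principle by brute force on a toy network of the assumed form z_{t+1} = A z_t + W relu(z_t) + h; the paper's algorithm searches the pattern space far more efficiently and also handles cycles, which this exhaustive enumeration does not.

```python
# Illustrative brute-force fixed-point search for a tiny ReLU-based RNN of the
# assumed form  z_{t+1} = A z_t + W * relu(z_t) + h.
# Within one ReLU activation pattern D (a 0/1 diagonal), the map is affine, so a
# candidate fixed point solves (I - A - W D) z = h and is valid only if z
# reproduces the pattern D. Exponential in network size; for illustration only.
from itertools import product
import numpy as np

def fixed_points_relu_rnn(A, W, h):
    n = len(h)
    fps = []
    for pattern in product([0.0, 1.0], repeat=n):        # all 2^n ReLU on/off patterns
        D = np.diag(pattern)
        M = np.eye(n) - A - W @ D
        try:
            z = np.linalg.solve(M, h)
        except np.linalg.LinAlgError:
            continue                                      # singular map: skip this region
        if np.array_equal((z > 0).astype(float), np.array(pattern)):  # consistency check
            fps.append(z)
    return fps

# Toy 4-unit example with random parameters.
rng = np.random.default_rng(0)
A = np.diag(rng.uniform(0.2, 0.8, size=4))
W = 0.5 * rng.standard_normal((4, 4))
h = rng.standard_normal(4)
print(fixed_points_relu_rnn(A, W, h))
```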
Our Perspective on reconstructing computational system dynamics from neural data finally out in Nature Rev Neurosci!
https://www.nature.com/articles/s41583-023-00740-7
We survey generative models that can be trained on time series data to mimic the behavior of the underlying neural substrate.
The prospects for applying dynamical systems theory in neuroscience are changing dramatically. In this Perspective, Durstewitz et al. discuss dynamical system reconstruction using recurrent neural networks to directly infer a formal surrogate from an experimentally probed system and consider its potential for revolutionizing neuroscience.
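As a rough illustration of the general idea (not any specific architecture from the Perspective), one can train an RNN on recorded time series by next-step prediction and then run it autonomously as a generative surrogate of the probed system. The sketch below uses a plain GRU cell and random stand-in data; model names and shapes are assumptions for illustration.

```python
# Minimal sketch of dynamical system reconstruction by next-step prediction.
# Illustrative only; not a specific model from the Perspective.
import torch
import torch.nn as nn

class SurrogateRNN(nn.Module):
    def __init__(self, obs_dim=3, hidden=64):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden)
        self.readout = nn.Linear(hidden, obs_dim)

    def forward(self, x_seq):
        # Teacher-forced one-step-ahead prediction over an observed sequence.
        h = torch.zeros(x_seq.shape[1], self.rnn.hidden_size)
        preds = []
        for x_t in x_seq[:-1]:
            h = self.rnn(x_t, h)
            preds.append(self.readout(h))
        return torch.stack(preds)

    @torch.no_grad()
    def generate(self, x0, steps):
        # Autonomous generation: feed the model's own predictions back in.
        h = torch.zeros(x0.shape[0], self.rnn.hidden_size)
        x, traj = x0, []
        for _ in range(steps):
            h = self.rnn(x, h)
            x = self.readout(h)
            traj.append(x)
        return torch.stack(traj)

# Toy usage with random stand-in data of shape (time, batch, obs_dim).
data = torch.randn(100, 8, 3)
model = SurrogateRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(data), data[1:])
loss.backward(); opt.step()
```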
This interview with Karikó ought to be mandatory reading for any scientist at any stage of their career.
And more child day care facilities please, at every academic institution. And less focus on papers and more on addressing research questions that advance our collective understanding. And more collaboration and less competition. I endorse every single statement. What a wonderful scientist and human being.
Scientist Katalin Karikó’s work didn’t get the attention it deserved until the start of the pandemic in 2020, when suddenly her area of expertise, mRNA, became the most important subject of research worldwide. From one day to the next, Karikó became the star of the scientific community. Today she looks back on why her research was not funded, advocates for new role models and explains why she didn’t give up her career for her family.