Applications are now open for the International Neuroscience Doctoral Programme (INDP) at Champalimaud Foundation, Lisbon, Portugal.
Deadline for application: Jan 31, 2026
https://fchampalimaud.org/champalimaud-research/education/indp
The programme includes an initial year of classes + three lab rotations.
Theoretical Insights on Training Instability in Deep Learning TUTORIAL
https://uuujf.github.io/instability/
The gradient-flow-like regime is slow and can overfit, while a large (but not too large) step size can transiently go far, converge faster, and find better solutions. #optimization #NeurIPS2025
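A toy illustration of the small- vs large-step-size contrast (my own sketch, not the tutorial's code): gradient descent on an ill-conditioned quadratic. A gradient-flow-like small step is stable but slow; a step size near the 2/L stability limit oscillates along the sharp direction yet makes much faster progress along the flat one.

```python
import numpy as np

# f(x) = 0.5 * x @ H @ x with curvatures 1.0 (sharp) and 0.01 (flat).
# Stability of gradient descent requires eta < 2/L, where L is the
# largest curvature (here L = 1.0, so eta must stay below 2.0).
H = np.diag([1.0, 0.01])

def run_gd(eta, steps=100):
    x = np.array([1.0, 1.0])
    for _ in range(steps):
        x = x - eta * (H @ x)       # plain gradient descent step
    return 0.5 * x @ H @ x          # final loss

loss_small = run_gd(eta=0.1)        # gradient-flow-like regime: stable, slow
loss_large = run_gd(eta=1.9)        # large but still below 2/L: oscillates
                                    # along the sharp axis, converges faster
# eta = 2.1 would exceed 2/L and diverge along the sharp direction
```

After 100 steps the large step size reaches a much lower loss than the small one, because the flat direction (curvature 0.01) barely moves under `eta=0.1`. The "transiently go far" behavior in the tutorial involves nonconvex loss surfaces and is not captured by this quadratic.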
Recurrent neural networks (RNNs) trained on neuroscience-inspired tasks offer powerful models of brain computation. However, typical training paradigms rely on open-loop, supervised settings, whereas real-world learning unfolds in closed-loop environments. Here, we develop a mathematical theory describing the learning dynamics of linear RNNs trained in closed-loop contexts. We first demonstrate that two otherwise identical RNNs, trained in either closed- or open-loop modes, follow markedly different learning trajectories. To probe this divergence, we analytically characterize the closed-loop case, revealing distinct stages aligned with the evolution of the training loss. Specifically, we show that the learning dynamics of closed-loop RNNs, in contrast to open-loop ones, are governed by an interplay between two competing objectives: short-term policy improvement and long-term stability of the agent-environment interaction. Finally, we apply our framework to a realistic motor control task, highlighting its broader applicability. Taken together, our results underscore the importance of modeling closed-loop dynamics in a biologically plausible setting.
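The open- vs closed-loop divergence can be sketched in a minimal 1-D toy (my illustration, not the paper's linear-RNN model; the plant parameters `a`, `b` and expert gain `k_star` are assumptions): a linear plant s' = a*s + b*u with linear policy u = -k*s. "Open-loop" training regresses onto expert actions over a fixed state dataset; "closed-loop" training descends the rollout cost, where the policy's own outputs shape the states it later visits, so learning must trade off policy improvement against stability of the loop |a - b*k| < 1.

```python
import numpy as np

a, b, k_star = 1.2, 1.0, 0.9        # unstable plant; the expert gain stabilizes it
rng = np.random.default_rng(0)

def rollout_cost(k, s0=1.0, T=30):
    """Quadratic state cost of running policy u = -k*s in closed loop."""
    s, cost = s0, 0.0
    for _ in range(T):
        cost += s * s
        s = a * s + b * (-k * s)    # closed loop: s' = (a - b*k) * s
    return cost

states = rng.normal(size=200)       # fixed state dataset for the open-loop setting

def open_loop_grad(k):
    # d/dk mean[(-k*s - (-k_star*s))^2] = 2*(k - k_star)*mean(s^2)
    return 2.0 * (k - k_star) * np.mean(states**2)

def closed_loop_grad(k, eps=1e-5):
    # central finite difference through the full rollout
    return (rollout_cost(k + eps) - rollout_cost(k - eps)) / (2 * eps)

k_open = k_closed = 0.0             # both start from the same (destabilizing) policy
for _ in range(200):
    k_open -= 1e-2 * open_loop_grad(k_open)
    # early closed-loop gradients explode with the unstable rollout,
    # so clip them -- stability dominates the first phase of learning
    k_closed -= 2e-2 * np.clip(closed_loop_grad(k_closed), -10.0, 10.0)
```

Both runs end with a stabilizing gain (|a - b*k| < 1), but they follow different trajectories and settle on different solutions, loosely mirroring the divergence between open- and closed-loop learning that the abstract describes.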
related:
Tricks to make it even faster.
Zoltowski, D. M., Wu, S., Gonzalez, X., Kozachkov, L., & Linderman, S. (2025). Parallelizing MCMC Across the Sequence Length. The Thirty-Ninth Annual Conference on Neural Information Processing Systems. https://openreview.net/forum?id=QOjUNzOkRN