@memming

5 Followers
10 Following
28 Posts
Group Leader at Champalimaud Centre for the Unknown
How can I accelerate the breakdown of caffeine in my body? I would need to increase CYP1A2 (P450) activity (without smoking). Vigorous exercise over 30 days was shown to increase it by up to 70%. https://pubmed.ncbi.nlm.nih.gov/1394840/
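A back-of-the-envelope sketch of what a 70% activity boost would buy, assuming first-order elimination and a baseline caffeine half-life of ~5 h (both are my assumptions for illustration, not from the linked study):

```python
# Toy caffeine-clearance calculation.
# Assumptions (mine, for illustration): first-order elimination kinetics,
# baseline half-life ~5 h, and a 70% CYP1A2 activity increase scaling
# clearance by 1.7x (so half-life shrinks by the same factor).
BASELINE_HALF_LIFE_H = 5.0
ACTIVITY_GAIN = 1.7  # 70% increase

def remaining_fraction(hours: float, half_life_h: float) -> float:
    """Fraction of a caffeine dose remaining after `hours` under first-order kinetics."""
    return 0.5 ** (hours / half_life_h)

boosted_half_life = BASELINE_HALF_LIFE_H / ACTIVITY_GAIN  # ~2.9 h
print(f"boosted half-life: {boosted_half_life:.1f} h")
print(f"after 8 h, baseline: {remaining_fraction(8, BASELINE_HALF_LIFE_H):.0%}")
print(f"after 8 h, boosted:  {remaining_fraction(8, boosted_half_life):.0%}")
```

Under these assumptions an afternoon espresso would be ~33% still on board at bedtime at baseline, but only ~15% with the boosted clearance.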
Learning a lot while preparing for a lecture on RNNs for neuroscience.
The research labs you can join through INDP range from systems, computational, and clinical neuroscience to neocybernetics, neuroethology, and natural intelligence.
To learn more about the culture and values, check out: https://www.fchampalimaud.org/about-cr
About CR | Champalimaud Foundation

We seek motivated applicants from all areas of neuroscience, as well as physics, math, computer science, electrical/biomedical engineering, and related quantitative backgrounds. English is the working language. It's an American-style graduate program in Europe.

Applications are now open for the International Neuroscience Doctoral Programme (INDP) at Champalimaud Foundation, Lisbon, Portugal.

Deadline for application: Jan 31, 2026

https://fchampalimaud.org/champalimaud-research/education/indp

The programme includes an initial year of classes + three lab rotations.

International Neuroscience Doctoral Programme | Champalimaud Foundation

The call for the Champalimaud Foundation's 2025 International PhD in Neuroscience will open in the first week of December 2024.

One advantage of a monosemantic, sharply-tuned, grandmother-cell, axis-aligned, neuron-centric representation, as opposed to a polysemantic, mixed-selective, oblique population code, is that it can benefit from evolution. Genes are good at operating at the level of individual cells. #neuroscience

Theoretical Insights on Training Instability in Deep Learning TUTORIAL
https://uuujf.github.io/instability/

a gradient-flow-like regime is slow and can overfit, while a large (but not too large) step size can transiently go far, converge faster, and find better solutions #optimization #NeurIPS2025

Training Instability
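A minimal toy of the step-size trade-off, assuming a badly conditioned quadratic (my sketch, not from the tutorial; the "transiently go far" behavior needs nonconvexity, but the slow-vs-fast contrast already shows up here):

```python
import numpy as np

# Gradient descent on f(x) = 0.5 * x^T diag(10, 0.1) x (my toy example).
# A small step size mimics the gradient-flow regime: stable but very slow
# along the flat direction. A large (but still stable, eta < 2/10) step
# oscillates along the sharp direction yet drives the loss down much faster.
H = np.array([10.0, 0.1])          # the two curvatures
x0 = np.array([1.0, 1.0])

def run_gd(eta: float, steps: int = 200) -> float:
    x = x0.copy()
    for _ in range(steps):
        x = x - eta * H * x        # GD step on the separable quadratic
    return 0.5 * float(np.sum(H * x ** 2))  # final loss

print(f"small step (0.05): loss = {run_gd(0.05):.4f}")
print(f"large step (0.19): loss = {run_gd(0.19):.6f}")
```

With eta = 0.19 the sharp coordinate flips sign every step (factor 1 - 1.9 = -0.9) but still shrinks, while the flat coordinate makes ~4x faster progress than with eta = 0.05.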

score/flow-matching diffusion models only start memorizing when trained for long enough
Bonnaire, T., Urfin, R., Biroli, G., & Mézard, M. (2025). Why Diffusion Models Don't Memorize: The Role of Implicit Dynamical Regularization in Training. https://openreview.net/forum?id=BSZqpqgqM0
Why Diffusion Models Don’t Memorize: The Role of Implicit...

Diffusion models have achieved remarkable success across a wide range of generative tasks. A key challenge is understanding the mechanisms that prevent their memorization of training data and allow...

analysis of a coupled dynamical system to study learning #cybernetics #learningdynamics
Ger, Y., & Barak, O. (2025). Learning dynamics of RNNs in closed-loop environments. arXiv:2505.13567 [cs.LG]. http://arxiv.org/abs/2505.13567
Learning Dynamics of RNNs in Closed-Loop Environments

Recurrent neural networks (RNNs) trained on neuroscience-inspired tasks offer powerful models of brain computation. However, typical training paradigms rely on open-loop, supervised settings, whereas real-world learning unfolds in closed-loop environments. Here, we develop a mathematical theory describing the learning dynamics of linear RNNs trained in closed-loop contexts. We first demonstrate that two otherwise identical RNNs, trained in either closed- or open-loop modes, follow markedly different learning trajectories. To probe this divergence, we analytically characterize the closed-loop case, revealing distinct stages aligned with the evolution of the training loss. Specifically, we show that the learning dynamics of closed-loop RNNs, in contrast to open-loop ones, are governed by an interplay between two competing objectives: short-term policy improvement and long-term stability of the agent-environment interaction. Finally, we apply our framework to a realistic motor control task, highlighting its broader applicability. Taken together, our results underscore the importance of modeling closed-loop dynamics in a biologically plausible setting.

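A scalar toy of the closed-loop coupling (my sketch, not the paper's model): environment x_{t+1} = a·x_t + b·u_t with linear policy u_t = k·x_t, so the closed loop is x_{t+1} = (a + b·k)·x_t and learning k must trade off cost improvement against the stability constraint |a + b·k| < 1.

```python
import numpy as np

# Toy closed-loop learning (my illustration of the setting, not the paper's
# analysis): an unstable scalar environment controlled by a learned linear
# gain, trained by gradient descent on the rollout cost.
a, b, T = 1.2, 1.0, 20             # open loop is unstable (a > 1)

def rollout_cost(k: float) -> float:
    x, cost = 1.0, 0.0
    for _ in range(T):
        x = (a + b * k) * x        # closed-loop step
        cost += x * x
    return cost

k = -0.1                           # initial gain: closed loop still unstable (factor 1.1)
for _ in range(500):
    eps = 1e-4                     # finite-difference gradient of the rollout cost
    grad = (rollout_cost(k + eps) - rollout_cost(k - eps)) / (2 * eps)
    k -= 0.01 * np.sign(grad)      # sign steps keep the unstable phase bounded

print(f"learned gain k = {k:.2f}, closed-loop factor = {a + b * k:.2f}")
```

The gradient first has to drag the loop into the stable region before the cost can shrink at all, which is the kind of two-stage trajectory the paper characterizes analytically for linear RNNs.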

related:

Tricks to make it even faster.
Zoltowski, D. M., Wu, S., Gonzalez, X., Kozachkov, L., & Linderman, S. (2025). Parallelizing MCMC Across the Sequence Length. The Thirty-Ninth Annual Conference on Neural Information Processing Systems. https://openreview.net/forum?id=QOjUNzOkRN

Parallelizing MCMC Across the Sequence Length

Markov chain Monte Carlo (MCMC) methods are foundational algorithms for Bayesian inference and probabilistic modeling. However, most MCMC algorithms are inherently sequential and their time...
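The general trick behind this line of work, in miniature (my illustration of parallelizing a sequential recursion via fixed-point sweeps, not the paper's MCMC algorithm): the whole trajectory of x_t = f(x_{t-1}) is the fixed point of a map that updates all time steps at once, and iterating that map does parallel (here, vectorized) work per sweep.

```python
import numpy as np

# Sequential recursion vs. parallel fixed-point (Jacobi-style) sweeps.
# f is a simple contraction; each sweep updates every time step from the
# previous guess, so correct values propagate one step per sweep and the
# exact trajectory is recovered after at most T sweeps.
def f(x):
    return 0.5 * x + 1.0

def sequential(x0, T):
    xs = [x0]
    for _ in range(T):
        xs.append(f(xs[-1]))       # strictly sequential dependence
    return np.array(xs[1:])

def parallel_fixed_point(x0, T, sweeps=50):
    xs = np.zeros(T)               # initial guess for the whole trajectory
    for _ in range(sweeps):
        prev = np.concatenate(([x0], xs[:-1]))
        xs = f(prev)               # all time steps updated in parallel
    return xs

seq = sequential(0.0, 10)
par = parallel_fixed_point(0.0, 10)
print(np.allclose(seq, par))       # prints True: same trajectory
```

On parallel hardware each sweep costs O(1) depth instead of O(T), which is the payoff these sequence-length-parallel methods are after.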