In the same vein as my #arxivfeed thing, here's a paper that I've been reading and really enjoying. I decided to spend more time on it than I usually do when reading papers because I wanted to search for gaps in my knowledge, and I really don't regret that decision! I'm only at the 4th section at the moment and I find it very well written, especially in the framing of things. So far it's a great overview!
"Neural Field Models: A mathematical overview and unifying framework"
https://arxiv.org/abs/2103.10554v4
#Neuroscience #ComputationalNeuroscience #MathematicalNeuroscience #NeuralFieldModelling #Biophysical #DynamicalSystems

Neural Field Models: A mathematical overview and unifying framework
Mathematical modelling of the macroscopic electrical activity of the brain is
highly non-trivial and requires a detailed understanding of not only the
associated mathematical techniques, but also the underlying physiology and
anatomy. Neural field theory is a population-level approach to modelling the
non-linear dynamics of large populations of neurons, while maintaining a degree
of mathematical tractability. This class of models provides a solid theoretical
perspective on fundamental processes of neural tissue such as state transitions
between different brain activities as observed during epilepsy or sleep.
Various anatomical, physiological, and mathematical assumptions are essential
for deriving a minimal set of equations that strike a balance between
biophysical realism and mathematical tractability. However, these assumptions
are not always made explicit throughout the literature. Even though neural
field models (NFMs) first appeared in the literature in the early 1970s, the
relationships between them have not been systematically addressed. This may
partially be explained by the fact that the inter-dependencies between these
models are often implicit and non-trivial. Herein we provide a review of key
stages of the history and development of neural field theory and contemporary
uses of this branch of mathematical neuroscience. First, the principles of the
theory are summarised through a discussion of the pioneering models of
Wilson and Cowan, Amari, and Nunez. Upon thorough review of these models, we
then present a unified mathematical framework from which all neural field
models can be derived by applying different assumptions. We then use this framework to
i) derive contemporary models by Robinson, Jansen and Rit, Wendling, Liley, and
Steyn-Ross, and ii) make explicit the many significant inherited assumptions
that exist in the current literature.
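For readers new to the area, a useful anchor is the classic Amari-type neural field equation, which captures the flavour of the models being unified here (a standard textbook form in my own notation, not a quotation from the paper):

\[
\tau \frac{\partial u(\mathbf{x},t)}{\partial t}
= -u(\mathbf{x},t)
+ \int_{\Omega} w(\mathbf{x},\mathbf{y})\, f\!\left(u(\mathbf{y},t)\right) \mathrm{d}\mathbf{y}
+ I(\mathbf{x},t),
\]

where \(u(\mathbf{x},t)\) is the mean activity at cortical position \(\mathbf{x}\), \(w\) the synaptic connectivity kernel, \(f\) a sigmoidal firing-rate function, and \(I\) an external input. The various named models in the paper correspond, roughly, to different assumptions layered on top of this kind of integro-differential equation.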
JGAT: a joint spatio-temporal graph attention model for brain decoding
The decoding of brain neural networks has been an intriguing topic in
neuroscience, promising a well-rounded understanding of different types of brain
disorders and cognitive stimuli. Integrating different types of connectivity,
e.g., Functional Connectivity (FC) and Structural Connectivity (SC), from
multi-modal imaging techniques takes their complementary information into
account and therefore has the potential to improve decoding capability.
However, traditional approaches for integrating FC and SC overlook dynamical
variations, which risk over-generalizing the brain's neural network. In this
paper, we propose the Joint kernel Graph Attention Network (JGAT), a new
multi-modal temporal graph attention framework. It integrates data from
functional Magnetic Resonance Imaging (fMRI) and Diffusion Weighted Imaging
(DWI) while preserving the dynamic information. We conduct brain-decoding tasks
with our JGAT on
four independent datasets: three 7T fMRI datasets from the Human Connectome
Project (HCP) and one from animal neural recordings. Furthermore, with
Attention Scores (AS) and Frame Scores (FS) computed and learned from the
model, we can locate several informative temporal segments and build meaningful
dynamical pathways along the temporal domain for the HCP datasets. Code for
the JGAT model is available at https://github.com/BRAINML-GT/JGAT.
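To make "graph attention over brain regions" a bit more concrete, here is a minimal single-head attention layer in the GAT style (my own toy sketch, not the authors' JGAT code; it omits the joint FC/SC kernel and all of the temporal machinery):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(H, A, W, a):
    """Minimal single-head graph attention layer (GAT-style).

    H : (N, F)   node features, e.g. an fMRI signal window per brain region
    A : (N, N)   binary adjacency, e.g. thresholded structural connectivity
    W : (F, F')  learned projection matrix
    a : (2*F',)  learned attention vector
    """
    Z = H @ W                                             # project node features
    N = Z.shape[0]
    logits = np.empty((N, N))
    for i in range(N):                                    # e_ij = LeakyReLU(a^T [z_i || z_j])
        for j in range(N):
            logits[i, j] = np.concatenate([Z[i], Z[j]]) @ a
    logits = np.where(logits > 0, logits, 0.2 * logits)   # LeakyReLU, slope 0.2
    logits = np.where(A > 0, logits, -1e9)                # attend only along graph edges
    alpha = softmax(logits, axis=1)                       # per-edge attention scores
    return alpha @ Z                                      # aggregate neighbours

# toy example: 5 brain regions, 8 features each
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
A = (rng.random((5, 5)) > 0.5).astype(float)
np.fill_diagonal(A, 1.0)
out = gat_layer(H, A, rng.normal(size=(8, 4)), rng.normal(size=(8,)))
print(out.shape)   # (5, 4)
```

The per-edge weights alpha are the kind of quantity that the paper's Attention Scores presumably build on, here restricted to edges supplied by the structural graph.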
Implicit Transfer Operator Learning: Multiple Time-Resolution Surrogates for Molecular Dynamics
Computing properties of molecular systems relies on estimating expectations of the (unnormalized) Boltzmann distribution. Molecular dynamics (MD) is a broadly adopted technique to approximate such quantities. However, stable simulations rely on very small integration time-steps ($10^{-15}\,\mathrm{s}$), whereas convergence of some moments, e.g. binding free energy or rates, might rely on sampling processes on time-scales as long as $10^{-1}\, \mathrm{s}$, and these simulations must be repeated for every molecular system independently. Here, we present Implicit Transfer Operator (ITO) Learning, a framework to learn surrogates of the simulation process with multiple time-resolutions. We implement ITO with denoising diffusion probabilistic models using a new SE(3) equivariant architecture and show the resulting models can generate self-consistent stochastic dynamics across multiple time-scales, even when the system is only partially observed. Finally, we present a coarse-grained CG-SE3-ITO model which can quantitatively model all-atom molecular dynamics using only coarse molecular representations. As such, ITO provides an important step towards multiple time- and space-resolution acceleration of MD. Code is available at https://github.com/olsson-group/ito.
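For orientation, the expectations in question are the standard Boltzmann averages (written here in my own notation, not the paper's):

\[
\langle O \rangle = \frac{1}{Z}\int O(\mathbf{x})\, e^{-U(\mathbf{x})/k_\mathrm{B}T}\, \mathrm{d}\mathbf{x},
\qquad
Z = \int e^{-U(\mathbf{x})/k_\mathrm{B}T}\, \mathrm{d}\mathbf{x},
\]

and, as I read the abstract, the surrogate sidesteps femtosecond integration by learning a conditional model of the state a lag \(N\Delta t\) ahead, roughly \(p_\theta(\mathbf{x}_{t+N\Delta t} \mid \mathbf{x}_t, N)\), so that slow processes can be sampled directly at long lags.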
Learning low-dimensional dynamics from whole-brain data improves task capture
The neural dynamics underlying brain activity are critical to understanding
cognitive processes and mental disorders. However, current voxel-based
whole-brain dimensionality reduction techniques fall short of capturing these
dynamics, producing latent timeseries that inadequately relate to behavioral
tasks. To address this issue, we introduce a novel approach to learning
low-dimensional approximations of neural dynamics by using a sequential
variational autoencoder (SVAE) that represents the latent dynamical system via
a neural ordinary differential equation (NODE). Importantly, our method finds
smooth dynamics that can predict cognitive processes with accuracy higher than
classical methods. Our method also shows improved spatial localization to
task-relevant brain regions and identifies well-known structures such as the
motor homunculus from fMRI motor task recordings. We also find that non-linear
projections to the latent space enhance performance for specific tasks,
offering a promising direction for future research. We evaluate our approach on
various task-fMRI datasets, including motor, working memory, and relational
processing tasks, and demonstrate that it outperforms widely used
dimensionality reduction techniques in how well the latent timeseries relates
to behavioral sub-tasks, such as left-hand or right-hand tapping. Additionally,
we replace the NODE with a recurrent neural network (RNN) and compare the two
approaches to understand the importance of explicitly learning a dynamical
system. Lastly, we analyze the robustness of the learned dynamical systems
themselves and find that their fixed points are robust across seeds,
highlighting our method's potential for the analysis of cognitive processes as
dynamical systems.
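As a rough sketch of the latent-dynamics idea (my own stripped-down version: a deterministic autoencoder with a fixed-step Euler integrator in place of the paper's sequential VAE and NODE solver):

```python
import torch
import torch.nn as nn

class LatentODE(nn.Module):
    """Simplified latent dynamics model: encode -> integrate ODE -> decode.
    (The paper uses a sequential VAE; the variational terms are omitted here.)"""
    def __init__(self, n_voxels, latent_dim=8, hidden=64):
        super().__init__()
        self.encoder = nn.Linear(n_voxels, latent_dim)        # initial latent state z(0)
        self.vector_field = nn.Sequential(                    # dz/dt = f(z)
            nn.Linear(latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim))
        self.decoder = nn.Linear(latent_dim, n_voxels)

    def forward(self, x0, n_steps, dt=0.1):
        z = self.encoder(x0)
        frames = []
        for _ in range(n_steps):                              # explicit Euler integration
            z = z + dt * self.vector_field(z)
            frames.append(self.decoder(z))
        return torch.stack(frames, dim=1)                     # (batch, T, voxels)

# toy usage: reconstruct a random "fMRI" sequence from its first frame
x = torch.randn(4, 20, 1000)                                  # (batch, T, voxels)
model = LatentODE(n_voxels=1000)
x_hat = model(x[:, 0], n_steps=20)
loss = ((x_hat - x) ** 2).mean()
loss.backward()
```

Swapping the Euler loop for a proper ODE solver, or for an RNN cell, mirrors the NODE-versus-RNN comparison the abstract describes.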
Probabilistic Exponential Integrators
Probabilistic solvers provide a flexible and efficient framework for simulation, uncertainty quantification, and inference in dynamical systems. However, like standard solvers, they suffer performance penalties for certain stiff systems, where small steps are required not for reasons of numerical accuracy but for the sake of stability. This issue is greatly alleviated in semi-linear problems by the probabilistic exponential integrators developed in this paper. By including the fast, linear dynamics in the prior, we arrive at a class of probabilistic integrators with favorable properties. Namely, they are proven to be L-stable, and in a certain case reduce to a classic exponential integrator -- with the added benefit of providing a probabilistic account of the numerical error. The method is also generalized to arbitrary non-linear systems by imposing piece-wise semi-linearity on the prior via Jacobians of the vector field at the previous estimates, resulting in probabilistic exponential Rosenbrock methods. We evaluate the proposed methods on multiple stiff differential equations and demonstrate their improved stability and efficiency over established probabilistic solvers. The present contribution thus expands the range of problems that can be effectively tackled within probabilistic numerics.
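For context, the deterministic backbone here is the exponential integrator, which treats the stiff linear part of a semi-linear ODE \(\dot{y} = Ly + N(y)\) exactly. A scalar sketch of the classic exponential Euler step (my own toy example, not the paper's probabilistic solver):

```python
import numpy as np

# stiff semi-linear test problem: dy/dt = L*y + N(t, y), with a fast linear part L
L = -50.0
N = lambda t, y: np.sin(t)            # slow, bounded "non-linear" part (here a forcing)

def explicit_euler(y0, dt, steps):
    y, t = y0, 0.0
    for _ in range(steps):
        y = y + dt * (L * y + N(t, y))
        t += dt
    return y

def exponential_euler(y0, dt, steps):
    # y_{n+1} = exp(L*dt) * y_n + dt * phi1(L*dt) * N(t_n, y_n),  phi1(z) = (exp(z) - 1) / z
    eLdt = np.exp(L * dt)
    phi1 = (eLdt - 1.0) / (L * dt)
    y, t = y0, 0.0
    for _ in range(steps):
        y = eLdt * y + dt * phi1 * N(t, y)
        t += dt
    return y

# with dt = 0.05 the explicit Euler recursion is unstable (|1 + L*dt| = 1.5 > 1) and
# grows without bound, while the exponential step stays stable because the linear
# part is integrated exactly
print(explicit_euler(1.0, 0.05, 200))
print(exponential_euler(1.0, 0.05, 200))
```

The paper's contribution, as I understand it, is to build that exact linear treatment into the prior of a probabilistic ODE solver, so the stability gain comes with a calibrated account of the numerical error.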

After a long break, new #arxivfeed
"Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation"
https://arxiv.org/abs/2305.15208
#BayesianInference #MachineLearning #Modelling #SBI #ComputationalNeuroscience

Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation
Simulation-based inference (SBI) enables amortized Bayesian inference for simulators with implicit likelihoods. But when we are primarily interested in the quality of predictive simulations, or when the model cannot exactly reproduce the observed data (i.e., is misspecified), targeting the Bayesian posterior may be overly restrictive. Generalized Bayesian Inference (GBI) aims to robustify inference for (misspecified) simulator models, replacing the likelihood function with a cost function that evaluates the goodness of parameters relative to data. However, GBI methods generally require running multiple simulations to estimate the cost function at each parameter value during inference, making the approach computationally infeasible for even moderately complex simulators. Here, we propose amortized cost estimation (ACE) for GBI to address this challenge: We train a neural network to approximate the cost function, which we define as the expected distance between simulations produced by a parameter and observed data. The trained network can then be used with MCMC to infer GBI posteriors for any observation without running additional simulations. We show that, on several benchmark tasks, ACE accurately predicts cost and provides predictive simulations that are closer to synthetic observations than other SBI methods, especially for misspecified simulators. Finally, we apply ACE to infer parameters of the Hodgkin-Huxley model given real intracellular recordings from the Allen Cell Types Database. ACE identifies better data-matching parameters while being an order of magnitude more simulation-efficient than a standard SBI method. In summary, ACE combines the strengths of SBI methods and GBI to perform robust and simulation-amortized inference for scientific simulators.
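A heavily simplified sketch of the ACE recipe on a toy one-dimensional simulator (entirely my own illustration: the small regressor and Metropolis sampler below stand in for the paper's neural cost network and MCMC machinery):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# toy "simulator": x ~ Normal(theta, 1); we want parameters whose simulations match x_obs
simulate = lambda theta: theta + rng.normal(size=np.shape(theta))
x_obs = 2.0

# 1) simulation phase: one simulation per sampled parameter, record distance to x_obs
theta_train = rng.uniform(-5, 5, size=2000)
cost_train = np.abs(simulate(theta_train) - x_obs)        # noisy estimate of expected cost

# 2) amortization: regress cost on theta (stand-in for the ACE cost network)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(theta_train.reshape(-1, 1), cost_train)
cost = lambda th: net.predict(np.array([[th]]))[0]

# 3) inference phase: Metropolis-Hastings on the generalized posterior
#    p(theta | x_obs) ∝ prior(theta) * exp(-beta * cost(theta)), with no new simulations
beta = 5.0
log_target = lambda th: -0.5 * (th / 5.0) ** 2 - beta * cost(th)   # wide Gaussian prior
theta = 0.0
lt = log_target(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.5)
    lp = log_target(prop)
    if np.log(rng.random()) < lp - lt:
        theta, lt = prop, lp
    samples.append(theta)
print(np.mean(samples[1000:]))     # should sit near x_obs = 2.0 for this toy problem
```

The expensive simulator is only touched in step 1; in the paper the cost network is additionally conditioned on the observation, as I understand it, so that inference is amortized across observations rather than pinned to a single x_obs as in this sketch.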

#arxivfeed
"Structural and neurophysiological alterations in Parkinson's disease are aligned with cortical neurochemical systems"
https://www.medrxiv.org/content/10.1101/2023.04.04.23288137v1
#Neuroscience #Multimodal #Neuroimaging #MEG #Neurochemical #Parkinsons
Incredible work, as always!