"Sketch of a novel approach to a neural model", by Gabriele Scheler, 2022.
https://arxiv.org/abs/2209.06865

"traditional synapse-centric, weight-based models of memorization are not sufficient or adequate to capture the real complexity of neuroplasticity. [...] We propose a paradigm switch from a synapse-centric model (each synapse learns independently, based on associative coupling) to a neuron-centric model (each neuron uses its intracellular pathways to express plasticity at its synapses and dendritic membrane)."

#neuroscience #CompNeurosci

1/2

Sketch of a novel approach to a neural model

In this position paper, we present biological detail about neuroplasticity with respect to cell-internal processing pathways and their relation to membrane and synaptic plasticity. We believe that traditional synapse-centric, weight-based models of memorization are not sufficient or adequate to capture the real complexity of neuroplasticity. In standard accounts, a neuronal network consists of a network of neurons connected by adaptive transmission links. The adaptation of these transmission links is overly simplified in the standard model of short-term and long-term potentiation or depression assuming weight adaptation according to use. We propose a paradigm switch from a synapse-centric model (each synapse learns independently, based on associative coupling) to a neuron-centric model (each neuron uses its intracellular pathways to express plasticity at its synapses and dendritic membrane). Each neuron has a 'vertical' dimension where internal parameters steer the external membrane- and synapse-expressed parameters. A neural model consists of (a) expression of parameters at the membrane, in particular dendritic synapses or spines, and axonal boutons (b) internal parameters in the sub-membrane zone and the cytoplasm with its protein signaling network and (c) core parameters in the nucleus for genetic and epigenetic information. In a neuron-centric model, each neuron in the horizontal network has its own internal memory. Transmission and memory are separate, not linked by strict use-dependence. There is filtering and selection of signals for processing and storage. Not every transmission event leaves a trace. This is a conceptual advance over synaptic weight models. The neuron is a self-programming device, rather than a transfer function determined by input. A new approach to neural modeling is better able to capture experimental evidence than synapse-centric models.
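The three-level "vertical" organization described in the abstract (membrane-expressed parameters, cytoplasmic internal state, nuclear core parameters) with gated storage can be made concrete in a toy sketch. This is purely illustrative, not the authors' model; the class name, thresholds, and update rules are invented for the example:

```python
import numpy as np

class NeuronCentricUnit:
    """Toy sketch of the 'vertical' neuron model (illustrative only):
    core (nuclear) parameters steer an internal (cytoplasmic) state,
    which in turn programs the membrane-expressed synaptic weights.
    Storage is gated, so not every transmission event leaves a trace."""

    def __init__(self, n_synapses, store_threshold=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.core = rng.uniform(0.5, 1.5)        # slow genetic/epigenetic parameter
        self.internal = np.zeros(n_synapses)     # cytoplasmic signaling state
        self.weights = np.ones(n_synapses)       # membrane-expressed weights
        self.store_threshold = store_threshold

    def transmit(self, inputs):
        # Transmission uses the currently expressed weights only.
        return float(self.weights @ inputs)

    def maybe_store(self, inputs):
        # Filtering/selection: only sufficiently salient events are stored.
        salience = float(np.abs(inputs).mean())
        if salience > self.store_threshold:
            self.internal += 0.1 * inputs        # internal trace, not a weight change yet
            # The neuron 'programs' its own synapses from internal state,
            # scaled by the core parameter (nucleus-level control).
            self.weights = 1.0 + self.core * np.tanh(self.internal)
        return salience > self.store_threshold

unit = NeuronCentricUnit(n_synapses=3)
weak = np.array([0.1, 0.0, 0.1])
strong = np.array([1.0, 0.8, 0.9])
print(unit.maybe_store(weak))    # weak event: filtered out, no trace
print(unit.maybe_store(strong))  # strong event: stored, weights re-expressed
```

The key contrast with a weight-based model is that transmission (`transmit`) and memory (`maybe_store`) are decoupled: use alone does not update the weights.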


The lab of Mitya Chklovskii introduces the #ReSU, the Rectified Spectral Unit, as a replacement for ReLU.

"A Network of Biologically Inspired Rectified Spectral Units (ReSUs) Learns Hierarchical Features Without Error Backpropagation", Qin et al. 2025
https://arxiv.org/abs/2512.23146

#neuroscience #CompNeurosci

A Network of Biologically Inspired Rectified Spectral Units (ReSUs) Learns Hierarchical Features Without Error Backpropagation

We introduce a biologically inspired, multilayer neural architecture composed of Rectified Spectral Units (ReSUs). Each ReSU projects a recent window of its input history onto a canonical direction obtained via canonical correlation analysis (CCA) of previously observed past-future input pairs, and then rectifies either its positive or negative component. By encoding canonical directions in synaptic weights and temporal filters, ReSUs implement a local, self-supervised algorithm for progressively constructing increasingly complex features. To evaluate both computational power and biological fidelity, we trained a two-layer ReSU network in a self-supervised regime on translating natural scenes. First-layer units, each driven by a single pixel, developed temporal filters resembling those of Drosophila post-photoreceptor neurons (L1/L2 and L3), including their empirically observed adaptation to signal-to-noise ratio (SNR). Second-layer units, which pooled spatially over the first layer, became direction-selective -- analogous to T4 motion-detecting cells -- with learned synaptic weight patterns approximating those derived from connectomic reconstructions. Together, these results suggest that ReSUs offer (i) a principled framework for modeling sensory circuits and (ii) a biologically grounded, backpropagation-free paradigm for constructing deep self-supervised neural networks.
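The core ReSU operation described in the abstract — project a recent window of input history onto a canonical direction from CCA of past–future pairs, then rectify — can be sketched in a few lines of NumPy. This is a minimal reading of the mechanism, not the authors' code; the windowing choices and the toy input signal are assumptions:

```python
import numpy as np

def top_cca_direction(X, Y, reg=1e-6):
    """Top canonical direction for X against Y (plain NumPy CCA via
    whitening + SVD). X: (n, p) past windows, Y: (n, f) future windows."""
    X = X - X.mean(0); Y = Y - Y.mean(0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Inverse matrix square roots whiten each block.
    ex, Ux = np.linalg.eigh(Cxx); Wx = Ux @ np.diag(ex**-0.5) @ Ux.T
    ey, Uy = np.linalg.eigh(Cyy); Wy = Uy @ np.diag(ey**-0.5) @ Uy.T
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U[:, 0]                # canonical direction in input space

class ReSU:
    """Rectified Spectral Unit (sketch): project the recent input window
    onto a CCA-derived canonical direction, then half-wave rectify."""
    def __init__(self, direction, sign=+1):
        self.w = direction             # plays the role of synaptic weights / temporal filter
        self.sign = sign               # +1: positive component, -1: negative component
    def __call__(self, window):
        return max(0.0, self.sign * float(self.w @ window))

# Fit on a noisy drifting signal: past windows of length 4 vs. the next 2 samples.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(2000)) * 0.1 + np.sin(np.arange(2000) * 0.05)
p, f = 4, 2
X = np.stack([x[t - p:t] for t in range(p, len(x) - f)])
Y = np.stack([x[t:t + f] for t in range(p, len(x) - f)])
unit = ReSU(top_cca_direction(X, Y))
print(unit(x[-p:]))                    # nonnegative scalar response
```

Because the direction comes from past–future correlations of the unit's own input, the learning rule is local and self-supervised — no error signal is propagated backward.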


#Neuroscience as a field is relatively fragmented
[...]
We propose leveraging shared #neurodata repositories and #compneurosci modelling frameworks to benchmark methodologies, facilitating a more coherent integration of findings across #neuro subfields.

Additionally, we advocate for the creation of a structured “map” of neuroscience, charting relationships between domains to enhance conceptual clarity.

https://doi.org/10.52294/001c.138841

#philosophyofneuroscience #theoreticalneuroscience #neuropsy #cogsci

Bridging the epistemological divide in neuroscience to improve ontological clarity | Published in Aperture Neuro

By Giulia Baracchini, Eli Muller & 1 more. This perspective highlights the epistemological divide that arises from the wide variety of different experimental approaches... which in turn lead to ontological clashes in our understanding of brain function.

The lab of Mitya Chklovskii is hiring, at the Flatiron Institute – Simons Foundation, in Manhattan, New York:
https://apply.interfolio.com/173400

#PhDJobs #neuroscience #CompNeurosci


The lab of Jakob Macke (Tuebingen, Germany) is recruiting a research engineer to work on brain models:
https://www.mackelab.org/media/Mackelab_ResearchEngineer.pdf

#neuroscience #CompNeurosci

"Algorithmic dissection of optic flow memory in larval zebrafish", Tanaka & Portugues 2025
https://www.sciencedirect.com/science/article/pii/S0960982225011133

#neuroscience #zebrafish #CompNeurosci


How do babies and blind people learn to localise sound without labelled data? We propose that innate mechanisms can provide coarse-grained error signals to bootstrap learning.

New preprint from @yang_chu.

https://arxiv.org/abs/2001.10605

Thread below 👇

#neuroscience #computationalneuroscience #compneuro #compneurosci

Learning spatial hearing via innate mechanisms

The acoustic cues used by humans and other animals to localise sounds are subtle, and change during and after development. This means that we need to constantly relearn or recalibrate the auditory spatial map throughout our lifetimes. This is often thought of as a "supervised" learning process where a "teacher" (for example, a parent, or your visual system) tells you whether or not you guessed the location correctly, and you use this information to update your map. However, there is not always an obvious teacher (for example in babies or blind people). Using computational models, we showed that approximate feedback from a simple innate circuit, such as one that can distinguish left from right (e.g. the auditory orienting response), is sufficient to learn an accurate full-range spatial auditory map. Moreover, using this mechanism in addition to supervised learning can more robustly maintain the adaptive neural representation. We find several possible neural mechanisms that could underlie this type of learning, and hypothesise that multiple mechanisms may be present and interact with each other. We conclude that when studying spatial hearing, we should not assume that the only source of learning is from the visual system or other supervisory signal. Further study of the proposed mechanisms could allow us to design better rehabilitation programmes to accelerate relearning/recalibration of spatial maps.
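The central idea — that a sign-only left/right signal suffices to learn a full graded spatial map — can be demonstrated with a toy sign-error learner. This is a hypothetical stand-in for the paper's models: the cue function, the population of tuned features, and all constants are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def cue(azimuth):
    # Hypothetical acoustic cue: a monotonic but unknown function of azimuth
    # (stand-in for interaural time/level differences).
    return np.tanh(1.5 * azimuth)

# Linear readout of azimuth from a small population of cue-tuned features.
centers = np.linspace(-1, 1, 9)
def features(c):
    return np.exp(-((c - centers) ** 2) / 0.1)

w = np.zeros(9)
lr = 0.05
for step in range(5000):
    az = rng.uniform(-1, 1)          # true source azimuth (hidden from the learner)
    phi = features(cue(az))
    est = w @ phi
    # Innate teacher: only reports which SIDE the guess erred on (sign),
    # not the graded error a visual teacher would provide.
    w += lr * np.sign(az - est) * phi

test_az = np.linspace(-0.9, 0.9, 50)
err = np.mean([abs(w @ features(cue(a)) - a) for a in test_az])
print(err)   # mean absolute error of the learned spatial map
```

Despite the one-bit feedback, the sign-based update converges on an accurate full-range map, which is the qualitative point the abstract makes about the auditory orienting response.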


Latest from Kathy Nagel's lab:

"Inhibitory control explains locomotor statistics in walking Drosophila", Gattuso et al. 2025
https://www.pnas.org/doi/abs/10.1073/pnas.2407626122

"we measure and analyze trajectories evoked by attractive odor in walking Drosophila and develop a biologically plausible computational model of trajectory generation and modulation by sensory input. Our model provides a link between neural architectures and locomotor behavior and highlights the potential role of inhibition in shaping the curvature and speed of trajectories. Inspired by this model, we experimentally identify single neurons and populations that modulate either curvature or speed in the manner predicted by our model."
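The quoted idea — inhibition shaping the curvature and speed of odor-evoked trajectories — can be illustrated with a toy random-walk generator. This is not the paper's model; the gain values and the form of the odor drive are assumptions chosen only to show the qualitative effect:

```python
import numpy as np

def walk(odor, n_steps=2000, seed=0):
    """Toy trajectory generator (not Gattuso et al.'s model): an inhibitory
    gain, recruited by odor input, suppresses turning noise and permits
    higher forward speed, yielding straighter, faster odor-evoked runs."""
    rng = np.random.default_rng(seed)
    inhibition = 1.0 + 3.0 * odor           # odor recruits inhibition of turning
    heading = 0.0
    pos = np.zeros(2)
    path = [pos.copy()]
    for _ in range(n_steps):
        heading += rng.standard_normal() * 0.3 / inhibition   # curvature noise
        speed = inhibition / (1.0 + inhibition)               # mild speed-up with odor
        pos = pos + speed * np.array([np.cos(heading), np.sin(heading)])
        path.append(pos.copy())
    return np.array(path)

def straightness(path):
    # Net displacement over total path length: 1.0 = perfectly straight.
    dist = np.linalg.norm(path[-1] - path[0])
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    return dist / length

print(straightness(walk(odor=0.0)))  # baseline: curvy local exploration
print(straightness(walk(odor=1.0)))  # odor: straighter, faster run
```

A single inhibitory variable thus modulates curvature and speed together, matching the intuition that distinct neurons or populations could target either knob separately.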

#neuroscience #Drosophila #locomotion #CompNeurosci #SystemsNeuroscience

"Forecasting Whole-Brain Neuronal Activity from Volumetric Video", Immer et al. 2025 (with Florian Engert, Jeff Lichtman, Misha Ahrens, Viren Jain and Michal Januszewski)
https://www.arxiv.org/abs/2503.00073

"ZAPBench: a benchmark for whole-brain activity prediction in zebrafish", Lueckmann et al. 2025
https://openreview.net/pdf?id=oCHsDpyawq

#ZAPBench #neuroscience #zebrafish #CalciumImaging #CompNeurosci

Forecasting Whole-Brain Neuronal Activity from Volumetric Video

Large-scale neuronal activity recordings with fluorescent calcium indicators are increasingly common, yielding high-resolution 2D or 3D videos. Traditional analysis pipelines reduce this data to 1D traces by segmenting regions of interest, leading to inevitable information loss. Inspired by the success of deep learning on minimally processed data in other domains, we investigate the potential of forecasting neuronal activity directly from volumetric videos. To capture long-range dependencies in high-resolution volumetric whole-brain recordings, we design a model with large receptive fields, which allow it to integrate information from distant regions within the brain. We explore the effects of pre-training and perform extensive model selection, analyzing spatio-temporal trade-offs for generating accurate forecasts. Our model outperforms trace-based forecasting approaches on ZAPBench, a recently proposed benchmark on whole-brain activity prediction in zebrafish, demonstrating the advantages of preserving the spatial structure of neuronal activity.
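For context, the "trace-based forecasting approaches" the abstract says the volumetric model outperforms are of roughly this shape: segment ROIs into 1D traces, then fit an autoregressive predictor on them. The sketch below is a generic linear-AR baseline on synthetic traces, assumed for illustration — not ZAPBench's actual baselines:

```python
import numpy as np

def fit_ar_forecaster(traces, context=8):
    """Generic trace-based baseline (sketch): one shared linear
    autoregressive model mapping the last `context` samples of each ROI's
    1D trace to its next sample, fit by least squares.
    `traces`: (n_rois, T) array of segmented activity traces."""
    X, y = [], []
    for tr in traces:
        for t in range(context, len(tr)):
            X.append(tr[t - context:t])
            y.append(tr[t])
    X, y = np.array(X), np.array(y)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast(coef, history, horizon):
    """Roll the AR model forward `horizon` steps from `history`."""
    h = list(history[-len(coef):])
    out = []
    for _ in range(horizon):
        nxt = float(np.dot(coef, h))
        out.append(nxt)
        h = h[1:] + [nxt]
    return np.array(out)

# Synthetic 'calcium traces': noisy oscillations with random phases.
rng = np.random.default_rng(0)
t = np.arange(300)
traces = np.stack([np.sin(0.2 * t + ph) + 0.05 * rng.standard_normal(300)
                   for ph in rng.uniform(0, 6.28, 20)])
coef = fit_ar_forecaster(traces[:, :250])
pred = forecast(coef, traces[0, :250], horizon=10)
print(np.abs(pred - traces[0, 250:260]).mean())  # forecast error on held-out steps
```

The segmentation step that produces such traces is exactly where spatial information is discarded — the loss the volumetric model avoids by forecasting the video directly.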
