Laurent Perrinet

@laurentperrinet@neuromatch.social
636 Followers
632 Following
2.2K Posts
I am a computational neuroscientist building spiking neural network models of low-level vision, perception and action. Currently at the “Institut de Neurosciences de la Timone” (Marseille, France), a joint research unit (CNRS / Aix-Marseille Université).
Searchable via https://www.tootfinder.ch/
Website: https://laurentperrinet.github.io
ORCID: http://orcid.org/0000-0002-9536-010X
GitHub: https://github.com/laurentperrinet
NeuroTree: https://neurotree.org/neurotree/peopleinfo.php?pid=18540
StackOverflow: https://stackoverflow.com/users/234547/meduz
Publons: https://publons.com/a/1206845/
Google Scholar: https://scholar.google.com/citations?user=TVyUV38AAAAJ
Instagram: https://www.instagram.com/laurentperrinet/
Wikipedia: https://en.wikipedia.org/wiki/User:LaurentPerrinet
Pronouns: He/Him
Pixelfed: https://pixelfed.social/i/web/profile/505657488394461667
Lemmy: https://lemmy.ml/u/laurentperrinet
Graphics: https://graphics.social/@meduz
Strava: https://www.strava.com/athletes/26726190

#generativeAI experiment: for the page of a software package, I am trying to generate a woman computer vision scientist in a Superman costume...

I tried different ways to force the generated images to depict a woman, but with no success: I only get the same male scientist!

#generativeart #generativeAI #bias #diversity

model = gemma3 / mflux / pinokio / exact prompt: "Ultra High Resolution low angle shot photo of a 50-year-old female computer vision woman scientist -be sure it is a female scientist- with rectangular, wire-rimmed glasses, a slight five o'clock shadow, and a determined expression, wearing a meticulously crafted Superman costume – the suit is slightly rumpled, hinting at a quick change, with the 'S' shield subtly askew, the red panties visibly worn on top of the costume and which should be very visible from that low angle shot, and a faint outline of a light blue, collared shirt visible at the neck – dynamically posed mid-jump, arms outstretched, legs powerfully propelling him forward, against a backdrop of a bustling New York City rooftop at golden hour, jagged skyscrapers piercing a dramatic, cloud-streaked sky of fiery oranges and deep purples, the setting sun casting long, dramatic shadows and a warm, golden glow on the scene – captured with a Nikon D850 and a 24-70mm lens at f/2.8, ISO 200, and a fast shutter speed of 1/2000 to freeze the action, emphasizing the power and energy of the jump – the scene evokes a sense of playful determination and unexpected heroism, inspired by the cinematic style of Gregory Crewdson, with a focus on realistic lighting, detailed textures, and a slightly unsettling yet captivating atmosphere – the air is filled with a gentle breeze ruffling the red cape and graying hair, creating a sense of movement and energy, dust motes catching the golden light – subtle reflections of the city lights gleam on the polished surface of the buildings and the scientist’s glasses – award-winning, epic composition, ultra detailed"

The remarkable energy efficiency of the human brain: one #Spike every 6 seconds!

In the groundbreaking paper "The Cost of Cortical Computation", published in 2003 in Current Biology, neuroscientist Peter Lennie reached a stunning conclusion about neural activity in the human brain: the average firing rate of cortical neurons is approximately 0.16 Hz, equivalent to just one spike every 6 seconds.

This finding challenges conventional assumptions about neural activity and reveals the extraordinary energy efficiency of the brain's computational strategy. Unconventional? Ask an LLM about it, and it will typically point to a baseline rate somewhere between 0.1 Hz and 10 Hz. Pretty high, and vague, right? But how did Lennie arrive at this remarkable figure?

The Calculation Behind the 0.16 Hz Baseline Rate

Lennie's analysis combines several critical factors:

1. Energy Constraints Analysis

Starting with the brain's known energy consumption (approximately 20% of the body's entire energy budget despite being only 2% of body weight), Lennie worked backward to determine how many action potentials this energy could reasonably support.

2. Precise Metabolic Costs

His calculations incorporated detailed metabolic requirements:

  • Each action potential consumes approximately 3.84 × 10⁹ ATP molecules
  • The human brain uses about 5.7 × 10²¹ ATP molecules daily

3. Neural Architecture

The analysis factored in essential neuroanatomical data:

  • The human cerebral cortex contains roughly 10¹⁰ neurons
  • Each neuron forms approximately 10⁴ synaptic connections

4. Metabolic Distribution

Using cerebral glucose utilization measurements from PET studies, Lennie accounted for energy allocation across different neural processes:

  • Maintaining resting membrane potentials
  • Generating action potentials
  • Powering synaptic transmission

By synthesizing these factors and dividing the available energy budget by the number of neurons and the energy cost per spike, Lennie calculated that cortical neurons can only sustain an average firing rate of approximately 0.16 Hz while remaining within the brain's metabolic constraints.
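The division described above can be sketched as a back-of-envelope calculation. Note that the paper allocates only part of the total energy budget to spiking; the spiking budget used below is an illustrative, back-computed figure (an assumption, not a number quoted from the paper), chosen to show how the pieces combine into a rate in hertz:

```python
# Back-of-envelope version of Lennie's argument: divide the ATP budget
# available for spiking by the number of neurons and the cost per spike.
# The spiking budget (~5.3e23 ATP/day) is an illustrative assumption.

SECONDS_PER_DAY = 86_400

def sustainable_rate_hz(atp_per_day_for_spiking, n_neurons, atp_per_spike):
    # spikes the whole cortex can afford per second, spread over all neurons
    spikes_per_second = atp_per_day_for_spiking / SECONDS_PER_DAY / atp_per_spike
    return spikes_per_second / n_neurons

rate = sustainable_rate_hz(
    atp_per_day_for_spiking=5.3e23,  # assumed share of the budget spent on spikes
    n_neurons=1e10,                  # cortical neurons
    atp_per_spike=3.84e9,            # ATP molecules per action potential
)
print(f"{rate:.2f} Hz")  # ≈ 0.16 Hz, i.e. one spike every ~6 seconds
```

The structure of the argument matters more than the exact constants: halving the cost per spike, or doubling the energy share devoted to spiking, only doubles the sustainable rate, so the conclusion of a sub-hertz average is robust.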

Implications for Neural Coding

This extremely low firing rate has profound implications for our understanding of neural computation. It suggests that:

  • Neural coding must be remarkably sparse — information in the brain is likely represented by the activity of relatively few neurons at any given moment
  • Energy efficiency has shaped brain evolution — metabolic constraints have driven the development of computational strategies that maximize information processing while minimizing energy use
  • Low baseline rates enable selective amplification — this sparse background activity creates a context where meaningful signals can be effectively amplified

The brain's solution to energy constraints reveals an elegant approach to computation: doing more with less through strategic sparsity rather than constant activity.

    This perspective on neural efficiency continues to influence our understanding of brain function and inspires energy-efficient approaches to #ArtificialNeuralNetworks and #neuromorphic computing.

    Science can be lots of fun, especially when you design novel stimuli to challenge the visual system. Take a breath and get into the tunnel!

    Here's an infinite tunnel where you move along its axis, with some perturbations of the center of your gaze relative to the focus of expansion. It's serious though! Think of having to orient yourself using that optic flow, and how difficult life would be without this faculty!
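This is not the code behind the demo (that is in the linked notebook), but the geometry of such a stimulus is easy to sketch: during forward self-motion every dot drifts radially away from the focus of expansion (FOE), which need not coincide with the center of gaze. A minimal numpy toy version, with assumed parameter values:

```python
import numpy as np

def step_flow(dots, foe, speed=0.05):
    # Advance dots by one frame of expanding optic flow:
    # every dot moves radially away from the focus of expansion (FOE)
    return dots + speed * (dots - foe)

rng = np.random.default_rng(0)
dots = rng.uniform(-1, 1, size=(100, 2))  # random dots on a unit screen
foe = np.array([0.2, 0.0])                # FOE offset from the gaze center
moved = step_flow(dots, foe)
```

Iterating `step_flow` produces the tunnel-like expansion; shifting `foe` relative to the screen center reproduces the gaze perturbation described above.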

    Code: https://laurentperrinet.github.io/sciblog/posts/2025-04-24-orienting-yourself-in-the-visual-flow.html

    #vision #neuroscience #opticflow

    Orienting yourself in the visual flow

    Moving through the world depends on our ability to perceive and interpret visual information, with optic flow playing a crucial role. Optic flow provides essential cues for self-motion and navigation,

    Scientific logbook

    Check out our poster at #cosyne2025: "Robust Unsupervised Learning of Spike Patterns with Optimal Transport Theory" with Antoine Grimaldi, Matthieu Gilson, myself, @artipago.bsky.social, Boris Sotomayor-Gomez, and Martin Vinck

    https://laurentperrinet.github.io/publication/grimaldi-25-cosyne/

    Robust Unsupervised Learning of Spike Patterns with Optimal Transport Theory | Next-generation neural computations

    Temporal sequences are an important feature of neural information processing in biology. Neurons can fire a spike with millisecond precision, and, at the network level, repetitions of spatiotemporal spike patterns are observed in neurobiological data. However, methods for detecting precise temporal patterns in neural activity suffer from high computational complexity and poor robustness to noise, and quantitative detection of these repetitive patterns remains an open problem. Here, we propose a new method to extract spike patterns embedded in raster plots using a 1D convolutional autoencoder with the Earth Mover’s Distance (EMD) as a loss function. Importantly, the properties of the EMD make the method suitable for spike-based distributions, easy to compute, and robust to noise. Through gradient descent, the autoencoder is trained to minimize the EMD between the input and its reconstruction. We then expect the weight matrices to learn the repeating spike patterns present in the data. We validate our method on synthetically generated raster plots and compare its performance with an autoencoder trained using the Mean Squared Error (MSE) as a loss function. We show that the method using the EMD performs better at detecting the occurrence of the spike patterns, while the method using the MSE is better at capturing the underlying distributions used to generate the spikes. Finally, we propose to train the autoencoder iteratively by sequentially combining the EMD and the MSE losses. This sequential approach outperforms the widely used seqNMF method in terms of robustness to various types of noise, speed and stability. Overall, our method provides a novel approach to reliably extract repetitive temporal spike sequences, and can be readily generalized to other sequence detection applications.
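The abstract's key ingredient, the Earth Mover's Distance, is cheap to compute in 1D: between two normalized histograms it reduces to the L1 distance between their cumulative sums. A quick illustrative implementation (not the authors' code, which uses it as an autoencoder loss):

```python
import numpy as np

def emd_1d(p, q):
    # 1D Earth Mover's Distance between two histograms:
    # after normalization, it equals the L1 distance between the CDFs
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return np.sum(np.abs(np.cumsum(p) - np.cumsum(q)))

# moving all the mass two bins away costs 2, one bin away costs 1:
print(emd_1d([1, 0, 0], [0, 0, 1]))  # → 2.0
print(emd_1d([1, 0, 0], [0, 1, 0]))  # → 1.0
```

Unlike the MSE, which treats any misplaced spike as equally wrong, this distance grows with how far mass has to move, which is what makes it well suited to jittered spike times.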


    Talking today at the NeuroMathématiques seminar at the Collège de France:

    "When Cortical Neurons Talk Sideways: Beyond Feedforward Visual Processing"

    Come for #diversity in connectivities #equity using predictive processing and #inclusion of #neuro and #math

    https://laurentperrinet.github.io/talk/2025-02-11-neuromath/

    #neuroAI

    When Cortical Neurons Talk Sideways: Beyond Feedforward Visual Processing | Next-generation neural computations

    In this seminar we will challenge the traditional understanding of neuronal connectivity in primary visual cortex. While current theory suggests that neurons connect preferentially to others with similar orientation preferences, I will present evidence for a more complex connectivity pattern based on a distance-dependent rule: short-range connections show a like-to-like bias, while long-range connections connect more widely. This revised model better explains how the visual cortex processes complex stimuli and accounts for observed variations in neuronal interactions at different scales.
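The distance-dependent rule in the abstract can be caricatured in a few lines. This is a toy illustration only; the functional form and constants are my assumptions, not the model presented in the talk:

```python
import numpy as np

def connection_bias(distance_mm, delta_ori_rad, d0=0.3):
    # Toy distance-dependent like-to-like rule (illustrative assumptions):
    # the orientation-specific bias decays with cortical distance, so
    # short-range connections prefer neurons with similar orientation
    # preference while long-range connections become unspecific.
    like_to_like = np.cos(2 * delta_ori_rad) * np.exp(-distance_mm / d0)
    return 0.5 * (1 + like_to_like)  # connection probability in [0, 1]

near_same  = connection_bias(0.05, 0.0)        # short range, same orientation
near_ortho = connection_bias(0.05, np.pi / 2)  # short range, orthogonal
far_same   = connection_bias(2.0, 0.0)         # long range, same orientation
far_ortho  = connection_bias(2.0, np.pi / 2)   # long range, orthogonal
```

At short range the same-orientation probability dominates; at long range the two probabilities converge, capturing the qualitative claim of the seminar.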

    🔥I’ve just done my #eXit! The X exodus is massive. Don't lose any of your followers: thanks to #HelloQuitX, I've registered 1131 new passengers for a journey to #BlueSky & #Mastodon. Join us on https://app.helloquitx.com and automatically find your communities on #January20!

    Free your digital spaces

    Do you remember when we didn't believe it could be done? Fortunately, a coordinated and positive initiative will let us act: https://www.helloquitx.com/ On #20Janvier (January 20), the day of Donald Trump's inauguration, let's quit X!
    HelloQuitX - Let's Quit X together

    🧠 Exploring secrets of human vision today at #McGill University! I'll be talking about how our brains achieve efficient visual processing through foveated retinotopy - nature's brilliant solution for high-res central vision.

    👉 When: Wednesday 9th of January 2025 at 12 noon.

    👉 Where: CRN seminar room, Montreal General Hospital, Livingston Hall, L7-140, with hybrid option.

    with Jean-Nicolas JÉRÉMIE and Emmanuel Daucé

    📄 Read our findings: https://arxiv.org/abs/2402.15480

    TL;DR: Standard #CNNs naturally mimic human-like visual processing when fed images that match our retina's center-focused mapping. Could this be the key to more efficient AI vision systems?
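The paper implements a foveated retinotopic transformation in the input layer of standard ResNets; a rough log-polar resampling sketch of the same idea (my own toy version with nearest-neighbour sampling, not the paper's code) looks like this:

```python
import numpy as np

def logpolar_foveate(img, out_h=64, out_w=64):
    # Log-polar resampling: sampling density is highest at the image
    # center (the "fovea") and falls off toward the periphery.
    h, w = img.shape[:2]
    cy, cx = h / 2, w / 2
    r_max = min(cy, cx)
    radii = np.exp(np.linspace(0, np.log(r_max), out_h)) - 1  # log-spaced rings
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    out = np.zeros((out_h, out_w) + img.shape[2:], dtype=img.dtype)
    for i, r in enumerate(radii):
        ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, w - 1)
        out[i] = img[ys, xs]  # nearest-neighbour sampling for brevity
    return out

img = np.arange(128 * 128, dtype=float).reshape(128, 128)
fov = logpolar_foveate(img)
```

Feeding such a remapped image to a standard CNN is what gives the paper its scale and rotation robustness: in log-polar coordinates, zooms and rotations of the input become mere translations.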

    #ComputationalNeuroscience

    #NeuroAI

    https://laurentperrinet.github.io/talk/2025-01-08-brain-seminar/

    Foveated Retinotopy Improves Classification and Localization in CNNs

    From a falcon detecting prey to humans recognizing faces, many species exhibit extraordinary abilities in rapid visual localization and classification. These are made possible by a specialized retinal region called the fovea, which provides high acuity at the center of vision while maintaining lower resolution in the periphery. This distinctive spatial organization, preserved along the early visual pathway through retinotopic mapping, is fundamental to biological vision, yet remains largely unexplored in machine learning. Our study investigates how incorporating foveated retinotopy may benefit deep convolutional neural networks (CNNs) in image classification tasks. By implementing a foveated retinotopic transformation in the input layer of standard ResNet models and re-training them, we maintain comparable classification accuracy while enhancing the network's robustness to scale and rotational perturbations. Although this architectural modification introduces increased sensitivity to fixation point shifts, we demonstrate how this apparent limitation becomes advantageous: variations in classification probabilities across different gaze positions serve as effective indicators for object localization. Our findings suggest that foveated retinotopic mapping encodes implicit knowledge about visual object geometry, offering an efficient solution to the visual search problem - a capability crucial for many living species.

    arXiv.org

    #ConvolutionalNeuralNetworks (#CNNs in short) are immensely useful for many #imageProcessing tasks and much more...

    Yet you sometimes encounter some bits of code with little explanation. Have you ever wondered about the origins of the values for image normalization in #imagenet ?

    • Mean: [0.485, 0.456, 0.406] (for R, G and B channels respectively)
    • Std: [0.229, 0.224, 0.225]

    Strangest to me is the need for three-digit precision. Here, after tracing the origin of these numbers for MNIST and ImageNet, I test whether that precision really matters: guess what, it does not (much)!

    👉 If interested in more details, check out https://laurentperrinet.github.io/sciblog/posts/2024-12-09-normalizing-images-in-convolutional-neural-networks.html
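A quick sketch of the kind of check described above, assuming a unit-range RGB input (the actual experiment is in the linked notebook):

```python
import numpy as np

# ImageNet normalization constants, full vs. rounded precision
mean = np.array([0.485, 0.456, 0.406])
std  = np.array([0.229, 0.224, 0.225])
mean2, std2 = np.round(mean, 2), np.round(std, 2)

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))  # stand-in for a unit-range RGB image

fine   = (img - mean) / std      # three-digit constants
coarse = (img - mean2) / std2    # two-digit constants
max_diff = np.abs(fine - coarse).max()
```

The worst-case pixel difference stays a small fraction of the roughly unit-variance scale of the normalized input, which is why downstream accuracy is barely affected by dropping the third digit.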

    Understanding Image Normalization in CNNs

    Architectural innovations in deep learning occur at a breakneck pace, yet fragments of legacy code often persist, carrying assumptions and practices whose necessity remains unquestioned. Practitioners

    I should act to get my #screentime under control. #iosfail