🌙Lune P Bellec

@pierre_bellec@neuromatch.social
613 Followers
432 Following
264 Posts
🏳️‍🌈 🏳️‍⚧️ 💜 Cursed queen of looming deadlines. Scientist breeding 🤖 with 🧠 data.
SIMEXP lab: https://simexp.github.io
Github: https://github.com/pbellec
Courtois NeuroMod: https://cneuromod.ca
BrainHack: https://brainhack.org
Google Scholar: https://scholar.google.com/citations?user=Yz8WY8YAAAAJ&hl=en

Join us for the first @cneuromod workshop, featuring the Algonauts brain encoding competition

Curious about creating models of how the brain processes video, sound, and language? Join us for our first CNeuroMod workshop where we’ll introduce a unique collection of brain data and rich stimuli (https://cneuromod.ca)—and show you how researchers can create AI models that encode brain activity from video data.

We’ll also dive into the Algonauts competition (https://algonautsproject.com/), where teams from around the world challenge each other to build the best AI models that mimic brain activity while participants are watching videos. This session is designed for a multidisciplinary audience —from psychologists to computer scientists—and will give you practical insights into the exciting intersection of brain science and artificial intelligence.

This special session is co-organized by the Unique Network (Union of Neuroscience and AI Québec) and will be held in a hybrid format (Zoom or in-person).

Date & Time: June 10th, 10 am - 12 pm
Location: Room M6809, CRIUGM, 4545 chemin Queen Mary, Montreal, H3W 1W4
Registration: Free, but mandatory, using https://www.eventbrite.com/e/cneuromod-workshop-tickets-1397224398789?aff=oddtdtcreator
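
For the curious, here is roughly what "creating AI models that encode brain activity from video data" means in practice. This is a minimal sketch only, not the workshop or Algonauts codebase: the data are random placeholders, and the feature extractor, shapes, and names are all assumptions. It fits one ridge regression per voxel from video-derived features to fMRI time series, then scores predictions with a voxelwise correlation.

```python
# Minimal brain-encoding sketch (illustrative shapes and data only):
# predict fMRI activity from features extracted from the video stimulus.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: one feature vector per fMRI time point (TR),
# e.g. activations of a pretrained video model averaged within each TR.
n_trs, n_features, n_voxels = 600, 512, 1000
video_features = rng.standard_normal((n_trs, n_features))
bold = rng.standard_normal((n_trs, n_voxels))   # fMRI time series

# Split over time (a real analysis would respect runs and sessions).
X_train, X_test, y_train, y_test = train_test_split(
    video_features, bold, test_size=0.2, shuffle=False)

# One ridge regression per voxel, regularization chosen by cross-validation.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7))
encoder.fit(X_train, y_train)
y_pred = encoder.predict(X_test)

def voxelwise_corr(a, b):
    """Pearson correlation between matching columns of a and b."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

scores = voxelwise_corr(y_pred, y_test)
print(f"median encoding accuracy: {np.median(scores):.3f}")
```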


new brain foundation model study, this time with a graph neural net architecture and fMRI, looking at a range of disorders and downstream tasks. https://arxiv.org/abs/2506.02044v1 #neuroAI
A Brain Graph Foundation Model: Pre-Training and Prompt-Tuning for Any Atlas and Disorder

As large language models (LLMs) continue to revolutionize AI research, there is a growing interest in building large-scale brain foundation models to advance neuroscience. While most existing brain foundation models are pre-trained on time-series signals or region-of-interest (ROI) features, we propose a novel graph-based pre-training paradigm for constructing a brain graph foundation model. In this paper, we introduce the Brain Graph Foundation Model, termed BrainGFM, a unified framework that leverages graph contrastive learning and graph masked autoencoders for large-scale fMRI-based pre-training. BrainGFM is pre-trained on a diverse mixture of brain atlases with varying parcellations, significantly expanding the pre-training corpus and enhancing the model's ability to generalize across heterogeneous fMRI-derived brain representations. To support efficient and versatile downstream transfer, we integrate both graph prompts and language prompts into the model design, enabling BrainGFM to flexibly adapt to a wide range of atlases, neurological and psychiatric disorders, and task settings. Furthermore, we employ meta-learning to optimize the graph prompts, facilitating strong generalization to previously unseen disorders under both few-shot and zero-shot learning conditions via language-guided prompting. BrainGFM is pre-trained on 27 neuroimaging datasets spanning 25 common neurological and psychiatric disorders, encompassing 2 types of brain atlases (functional and anatomical) across 8 widely-used parcellations, and covering over 25,000 subjects, 60,000 fMRI scans, and a total of 400,000 graph samples aggregated across all atlases and parcellations. The code is available at: https://github.com/weixinxu666/BrainGFM
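
To make the abstract's graph framing concrete, here is a minimal, hypothetical sketch of the kind of input such a model consumes: a brain graph built from a parcellated fMRI run (nodes = atlas regions, edges = thresholded functional connectivity), passed through a single standard GCN layer. It only illustrates the general idea; the atlas size, threshold, and read-out are arbitrary assumptions, and the actual BrainGFM implementation lives in the repository linked above.

```python
# Build a toy brain graph from a parcellated fMRI run and run one GCN layer.
# All numbers are placeholders; this is not the BrainGFM architecture.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n_rois, n_trs = 100, 400                     # e.g. a 100-region atlas
roi_timeseries = rng.standard_normal((n_rois, n_trs))

# Node features: each ROI's connectivity profile; edges: strong correlations.
fc = np.corrcoef(roi_timeseries)             # functional connectivity matrix
adj = (np.abs(fc) > 0.3).astype(np.float64)  # arbitrary edge threshold
np.fill_diagonal(adj, 1.0)                   # keep self-loops

# Symmetrically normalized adjacency, as in a standard GCN (Kipf & Welling).
deg_inv_sqrt = 1.0 / np.sqrt(adj.sum(1))
adj_norm = torch.tensor(adj * np.outer(deg_inv_sqrt, deg_inv_sqrt),
                        dtype=torch.float32)
x = torch.tensor(fc, dtype=torch.float32)    # node features (n_rois, n_rois)

class GCNLayer(nn.Module):
    """One graph convolution: aggregate neighbors, then a linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        return torch.relu(self.linear(adj_norm @ x))

layer = GCNLayer(in_dim=n_rois, out_dim=64)
node_embeddings = layer(x, adj_norm)         # (n_rois, 64)
graph_embedding = node_embeddings.mean(0)    # crude read-out for the scan
print(graph_embedding.shape)                 # torch.Size([64])
```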

#neuroAI preprint alert: studying the convergence of multimodal AI features with brain activity during movie watching. Notably identifies brain areas where multimodal features outperform unimodal stimulus representations. And it uses @cneuromod.ca 's movie10 dataset :) https://arxiv.org/abs/2505.20027
Multi-modal brain encoding models for multi-modal stimuli

Despite participants engaging in unimodal stimuli, such as watching images or silent videos, recent work has demonstrated that multi-modal Transformer models can predict visual brain activity impressively well, even with incongruent modality representations. This raises the question of how accurately these multi-modal models can predict brain activity when participants are engaged in multi-modal stimuli. As these models grow increasingly popular, their use in studying neural activity provides insights into how our brains respond to such multi-modal naturalistic stimuli, i.e., where it separates and integrates information across modalities through a hierarchy of early sensory regions to higher cognition. We investigate this question by using multiple unimodal and two types of multi-modal models-cross-modal and jointly pretrained-to determine which type of model is more relevant to fMRI brain activity when participants are engaged in watching movies. We observe that both types of multi-modal models show improved alignment in several language and visual regions. This study also helps in identifying which brain regions process unimodal versus multi-modal information. We further investigate the contribution of each modality to multi-modal alignment by carefully removing unimodal features one by one from multi-modal representations, and find that there is additional information beyond the unimodal embeddings that is processed in the visual and language regions. Based on this investigation, we find that while for cross-modal models, their brain alignment is partially attributed to the video modality; for jointly pretrained models, it is partially attributed to both the video and audio modalities. This serves as a strong motivation for the neuroscience community to investigate the interpretability of these models for deepening our understanding of multi-modal information processing in brain.
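
One analysis from the abstract that translates nicely into a few lines of code is the "remove unimodal features from the multi-modal representation" test. The sketch below is only a rough illustration under stated assumptions: shapes and data are made up, and a plain ridge encoder stands in for the paper's actual pipeline. It regresses the unimodal embedding out of the multi-modal one and asks whether the residual still predicts brain activity.

```python
# Does a multi-modal embedding predict fMRI beyond a unimodal one?
# Regress the unimodal features out, then score the residual (toy data).
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_trs = 500
video_feat = rng.standard_normal((n_trs, 256))   # unimodal (vision) features
multi_feat = rng.standard_normal((n_trs, 256))   # multi-modal model features
bold = rng.standard_normal((n_trs, 800))         # voxels or parcels

# Remove everything in the multi-modal features that is linearly
# explained by the unimodal ones (ordinary least-squares residuals).
beta, *_ = np.linalg.lstsq(video_feat, multi_feat, rcond=None)
residual_feat = multi_feat - video_feat @ beta

def encoding_score(features, bold, n_train=400):
    """Fit ridge on the first n_train TRs, return mean test correlation."""
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    model.fit(features[:n_train], bold[:n_train])
    pred = model.predict(features[n_train:])
    true = bold[n_train:]
    pred = (pred - pred.mean(0)) / pred.std(0)
    true = (true - true.mean(0)) / true.std(0)
    return float((pred * true).mean())

print("unimodal features:   ", encoding_score(video_feat, bold))
print("multi-modal residual:", encoding_score(residual_feat, bold))
```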


Please note: my main professional email address is now lune.bellec@umontreal.ca — my previous address has been deactivated.

This may just be an email address, but it’s one that brings me a lot of joy.

Many thanks to @umontreal for making the transition smooth (once I actually read the instructions properly 😅).

🔍 Can one hundred scans be linked to hearing loss? The case of the Courtois NeuroMod project

For over five years, the Courtois NeuroMod project scanned six participants weekly using fMRI — creating the largest individual-subject fMRI dataset ever collected.

MRI machines are loud, and participants wore MRI-compatible Sensimetrics earphones with foam inserts plus additional custom over-the-ear protection. Even so, we remained vigilant about potential impacts on auditory health.

📣 A new study, led by Eddie Fortier under the supervision of Adrian Fuente, and now published in PLOS ONE, presents the results of an auditory monitoring protocol conducted in parallel with CNeuroMod:
🔗 https://lnkd.in/eRinvsDH
Key Findings:

Across participants, we found no clinical signs of ear trauma immediately following scanning. Changes in detection thresholds were typically <10 dB, even in high-frequency ranges (>10 kHz) where variability was greatest.

One participant with pre-existing unilateral hearing loss was tested across five sessions. Their results were inconsistent — and in some cases, paradoxically showed improved sensitivity post-scan — likely due to test-retest variability and fatigue effects in the upper frequency range.

In long-term follow-up (up to 16 months delay), we observed no sustained hearing loss. While high-frequency measures remained variable, no clinically significant, consistent declines were found across the group.

🎧 While pure tone audiometry is a cognitively demanding test — especially following extended scanning sessions — our findings are reassuring: with proper hearing protection, even repeated, long-duration fMRI protocols like CNeuroMod can be conducted safely. See the paper for full results and a complete discussion.
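
For readers unfamiliar with audiometric bookkeeping, a statement like "changes in detection thresholds were typically <10 dB" boils down to subtracting pre-scan from post-scan thresholds at each test frequency. Here is a toy sketch with invented numbers; the 10 dB flag and the frequency grid are purely illustrative assumptions, not the study's actual protocol or statistics (see the paper for those).

```python
# Compare pre- and post-scan pure tone thresholds and flag large shifts.
# All values are made up for illustration; positive shift = worse sensitivity.
freqs_khz = [0.5, 1, 2, 4, 8, 10, 12.5, 14, 16]

# Hypothetical detection thresholds in dB HL for one ear (lower = better).
pre_scan  = {0.5: 10, 1: 5, 2: 10, 4: 15, 8: 20, 10: 25, 12.5: 30, 14: 35, 16: 40}
post_scan = {0.5: 10, 1: 10, 2: 10, 4: 15, 8: 25, 10: 30, 12.5: 25, 14: 45, 16: 40}

for f in freqs_khz:
    shift = post_scan[f] - pre_scan[f]
    flag = "  <-- shift >= 10 dB" if shift >= 10 else ""
    print(f"{f:>5} kHz: {shift:+3d} dB{flag}")
```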

The five-year CNeuroMod data collection phase is now complete, and we are deeply grateful to the participants who committed their time to this study, and to the Courtois Foundation for their visionary support.

We are now preparing a series of public data releases and publications that will continue to explore the many facets of this unique longitudinal dataset.

Stay tuned.


Academia could indeed become the bedrock of a more democratic, pluralistic and progressive social media, but "it would require universities, as well as sector-wide organisations such as funding councils and learned societies, to recognise and take a stance in relation to these issues in a way they have thus far failed to do."

https://blogs.lse.ac.uk/impactofsocialsciences/2025/03/03/bluesky-will-trap-academics-in-the-same-way-twitter-x-did/

Pretty much exactly what we have also pointed out:
https://royalsocietypublishing.org/doi/10.1098/rsos.230207

Bluesky will trap academics in the same way Twitter/X did - Impact of Social Sciences

Commercial platforms & social media companies are designed to maximise switching costs to retain users. Will Bluesky do the same for academics?


"Oh, they already have so much data on me anyway, what's the point of trying to protect myself from it..."

Well, no. It's a shame to think that way. Yes, surveillance capitalism is everywhere. But every time you escape it, you deprive them of one more scrap of information.

Start small. Refuse cookies instead of accepting them. Use uBlock Origin. Then switch to Firefox. Go untick the privacy settings on the sites you use. Uninstall the apps you don't need. Leave Gmail. Then, little by little, delete your accounts, install open-source software, change your Android keyboard, switch to Linux.

Take your time. But take back control.

It's also running in a fun video mode: 512x224!
This gets stretched out to the normal aspect ratio (8:7), but it does mean the game is effectively running with pixels that are twice as tall as they are wide.
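
The pixel geometry is easy to verify from the numbers in the post: a 512x224 frame shown at an 8:7 picture aspect ratio implies a pixel aspect ratio of (8/7) / (512/224) = 1/2, i.e. pixels twice as tall as they are wide. A quick check:

```python
# Pixel aspect ratio for a 512x224 frame displayed at an 8:7 picture ratio.
width_px, height_px = 512, 224
display_aspect = 8 / 7                        # width / height of the picture
pixel_aspect = display_aspect / (width_px / height_px)
print(pixel_aspect)                           # 0.5 -> twice as tall as wide
```
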
Something I’m really looking forward to in 2025 is the rise of truly multimodal models—capable of jointly processing vision, language, audio, and actions. These systems are finally starting to approach the richness of data streams available to human 🧠s. A great example of this direction is outlined in this recent work: https://openreview.net/forum?id=K4FAFNRpko
VLAS: Vision-Language-Action Model with Speech Instructions for...

Vision-language-action models (VLAs) have recently become highly prevalent in robot manipulation due to their end-to-end architecture and impressive performance. However, current VLAs are limited to...


😀 That's it: our HelloQuitteX platform is now compatible with all Mastodon instances ➡️ http://app.HelloQuitX.com
If you had an X account:

- take 1 min to link your X account with your Mastodon account; this will let newcomers find you easily. ⚠️ Do it now from your smartphone, it's instant!

- Take 2 min to share the followers/followees files from your X archive. That will let us reconnect more people here.

See also our FAQ: https://helloquittex.com/Comment-demenager-de-X-en-5min.html

HelloQuitteX

Free your digital spaces