3,013 neurons, half a million synapses: the complete #connectome of the whole #Drosophila larval brain!

Winding, Pedigo et al. 2022. "The connectome of an insect brain" https://www.biorxiv.org/content/10.1101/2022.11.28.516756v1

We’ve mapped and analysed its circuit architecture, from sensory neurons to brain output neurons, as reconstructed from volume electron microscopy, and here is what we found. 1/

#neuroscience #connectomics #vEM #volumeEM

Our map of the #Drosophila larval brain #connectome is complete, with all inputs and all outputs, and everything in between: all polysynaptic pathways from sensory neurons all the way to brain output neurons, across both brain hemispheres. 2/

#neuroscience #connectomics

Our analysis of the #Drosophila larval brain starts by recognizing that neurons are polarized: 95.5% of all brain neurons have clearly segregated axons and dendrites.

In the #connectome, we found 66% axo-dendritic synapses, 26% axo-axonic, 6% dendro-dendritic and 2% dendro-axonic.

This matters because inputs onto dendrites contribute to the integration function of a neuron; inputs onto an axon modulate its output. Analysing them separately makes sense.

#neuroscience #connectomics 3/
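As a toy illustration (not the paper's pipeline), the four connection types can be tallied from a synapse table in which each synapse is labelled with its pre- and postsynaptic compartment. All names and data below are made up:

```python
# Sketch: tallying the four connection types (axo-dendritic, axo-axonic,
# dendro-dendritic, dendro-axonic) from a hypothetical synapse table.
from collections import Counter

# Illustrative synapse records: which compartment is pre- and postsynaptic.
synapses = [
    {"pre": "axon", "post": "dendrite"},
    {"pre": "axon", "post": "axon"},
    {"pre": "axon", "post": "dendrite"},
    {"pre": "dendrite", "post": "dendrite"},
]

label = {("axon", "dendrite"): "axo-dendritic",
         ("axon", "axon"): "axo-axonic",
         ("dendrite", "dendrite"): "dendro-dendritic",
         ("dendrite", "axon"): "dendro-axonic"}

counts = Counter(label[(s["pre"], s["post"])] for s in synapses)
total = sum(counts.values())
fractions = {k: v / total for k, v in counts.items()}
print(fractions)
# {'axo-dendritic': 0.5, 'axo-axonic': 0.25, 'dendro-dendritic': 0.25}
```

Splitting the connectome into one graph per connection type, as in the paper, then amounts to filtering edges by these labels before any network analysis.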

After splitting the #Drosophila larval brain #connectome into 4 types of edges, we used hierarchical spectral clustering to define about 90 groups of neurons.

Remarkably, clusters defined by connectivity alone were internally consistent for other features, such as neuron morphology or function.

Clusters were sorted from sensory neurons (SNs) to descending neurons (DNs) using the Walk-Sort algorithm. To the right: example clusters, with intracluster morphological similarity scored using NBLAST.

4/
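The core idea of spectral clustering can be shown on a toy undirected adjacency matrix (the paper's version is hierarchical and works over the four edge types; this is just the basic one-level, one-graph case):

```python
# Sketch: spectral clustering on a toy connectivity graph.
import numpy as np

# Two obvious communities: nodes 0-2 and 3-5, joined by one bridging edge.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))
L = D - A                              # unnormalised graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                # eigenvector of 2nd-smallest eigenvalue
clusters = (fiedler > 0).astype(int)   # bipartition by sign
print(clusters)                        # nodes 0-2 vs nodes 3-5
```

Recursing on each resulting group (and using more eigenvectors plus k-means for more than two clusters) gives the hierarchical variant.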

Next, we explored the #Drosophila larval #connectome with multi-hop signal cascades (left) that extended across synapses up to a depth of 5, and sorted neurons into labelled-line and multisensory categories (right).

Neurons were considered to receive sensory input when visited in most cascade iterations.

The majority of brain neurons integrate input from all sensory types, but a few neurons integrate input from only one sensory modality (a labelled line) or from a specific combination.

#neuroscience #connectomics

5/
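A minimal sketch of the cascade idea, assuming a toy directed connectome and a fixed per-edge transmission probability (the paper's actual algorithm and parameters differ):

```python
# Sketch: a stochastic multi-hop signal cascade. Starting from a sensory
# neuron, activation spreads along directed synaptic edges with probability
# p per edge, for up to 5 hops. Repeated runs estimate which neurons
# reliably receive sensory input.
import random

# Toy directed connectome: neuron -> list of downstream neurons.
edges = {
    "SN1": ["A", "B"],
    "A": ["C"],
    "B": ["C", "D"],
    "C": ["DN1"],
    "D": ["DN1"],
}

def cascade(start, max_hops=5, p=0.9, rng=random.Random(0)):
    """Return the set of neurons visited by one stochastic cascade."""
    visited, frontier = {start}, {start}
    for _ in range(max_hops):
        nxt = set()
        for n in frontier:
            for m in edges.get(n, []):
                if m not in visited and rng.random() < p:
                    nxt.add(m)
        visited |= nxt
        frontier = nxt
    return visited

# Neurons visited in most of many cascade runs "receive" sensory input.
runs = [cascade("SN1") for _ in range(100)]
freq = {n: sum(n in r for r in runs) / len(runs)
        for n in ("A", "B", "C", "D", "DN1")}
print(freq)
```

Running cascades from each sensory modality in turn, and checking which neurons are reached from one versus many modalities, distinguishes labelled-line from multisensory neurons.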

Then we studied recurrent circuits in the #Drosophila larval brain.

By starting bi-directional multi-hop signal cascades at each cluster, we found that the cluster containing the dopaminergic neurons (DANs) of the mushroom body (MB), the insect centre for associative learning and memory, produces the most cascades that begin and end at the same cluster!

In other words, DANs, which mediate learning, are the most recurrent neurons in the brain.

#neuroscience #connectomics 6/
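One simple (illustrative, not the paper's) way to quantify recurrence is to count directed walks that leave a cluster and return to it within a few hops, using powers of a cluster-level adjacency matrix:

```python
# Sketch: recurrence as the number of directed walks of length 2..max_hops
# that start and end at the same cluster, via adjacency-matrix powers.
import numpy as np

# Toy cluster graph: cluster 0 and cluster 1 form a recurrent loop;
# cluster 2 receives input but never projects back.
C = np.array([
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
], dtype=int)

def return_walks(adj, cluster, max_hops=5):
    """Count walks of length 2..max_hops starting and ending at `cluster`."""
    total, power = 0, adj.copy()
    for _ in range(2, max_hops + 1):
        power = power @ adj
        total += power[cluster, cluster]
    return int(total)

print([return_walks(C, c) for c in range(3)])  # [2, 2, 0]
```

Clusters in loops (0 and 1) score high; the purely feed-forward cluster (2) scores zero, mirroring how the DAN cluster stands out in the cascade analysis.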

With all descending neurons (DNs) mapped, we could have a look at how the #Drosophila larval brain drives locomotion.

By determining the spatial projection pattern of all axons of DNs, and the known contribution of each body segment to locomotion, we inferred which behaviours can be controlled by which DNs, and then, which brain neurons control those DNs.

#neuroscience #connectomics 7/

A huge THANK YOU to everyone who worked on this project for 10 years, starting with first co-authors Michael Winding and Ben Pedigo at the University of Cambridge and Johns Hopkins. A collaboration with Marta Zlatic, Carey E. Priebe, and Joshua Vogelstein.

This work started at #HHMIJanelia and continued at the #MRCLMB in Cambridge, UK.

All neuron reconstructions were done painstakingly by hand with #CATMAID by over 80 people! Thanks so much!

#neuroscience #connectomics
/END

@albertcardona As far as you know, do the neural networks at the heart of machine (and deep?) learning attempt to mimic, or end up mimicking, dopaminergic neurons (DANs), which appear to be linked to the ability to learn? I imagine emulating sensory/multisensory neurons (SNs) would be useful in the field of robotics.

@alex_p_roe

Artificial neural networks work in a very different way from biological ones. For one, it takes a deep neural network to emulate the capabilities of a single pyramidal neuron in the mammalian cortex. And ANNs lack axo-axonic synapses, active dendritic spikes, redundant inputs across different dendritic branches, and more, all of which matter a lot and are the subject of a number of scientific publications. The differences are huge: the two are not comparable beyond the fact that both are networks.

@albertcardona Thanks. Very interesting to hear that ANNs are not close to mammalian neural networks - which may mean they are unable to become sentient - could this be the cause of hallucination? I imagine we will need to build a copy of a brain and then “teach” it, although without sensory organs, this won’t be easy unless we create a replica of a living organism. Is that where your insect studies are heading?

@alex_p_roe For the time being I am content with mapping brain circuits and making sense of them through a combination of genetics, functional imaging, observation of behavioural perturbations, and computational modeling. All of this is possible in a tiny organism, and not at all in a large one; at least, not if one has the ambition of studying the complete brain at nanometre resolution.

As for the "hallucinations": large language models don't hallucinate. They are simply statistical models of language, and therefore what they generate is what is plausible given the corpus of texts they were trained on. They do not reason. They also fall prey to the curse of dimensionality: the higher the number of dimensions, the more "empty space" (regions of the activation space where nothing makes sense) there is.
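The curse of dimensionality can be demonstrated in a few lines: for points drawn uniformly in a hypercube, pairwise distances concentrate around their mean as the dimension grows, so "near" and "far" lose meaning (a toy demonstration, not a claim about any particular model's activation space):

```python
# Sketch: distance concentration in high dimensions. The relative contrast
# (std/mean of pairwise distances) shrinks as dimensionality grows.
import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(dim, n=200):
    """std/mean of pairwise distances for n uniform points in [0,1]^dim."""
    pts = rng.random((n, dim))
    sq = (pts ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * pts @ pts.T  # squared distances
    d = np.sqrt(np.clip(d2, 0, None))
    d = d[np.triu_indices(n, k=1)]                    # upper triangle only
    return d.std() / d.mean()  # small value = all distances look alike

for dim in (2, 10, 100, 1000):
    print(dim, round(relative_contrast(dim), 3))
```

As the dimension goes from 2 to 1000 the contrast collapses toward zero: almost all of the volume is "empty space" roughly equidistant from everything.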

@albertcardona An interesting and clear reply. Thanks. Mapping brain circuits does sound interesting and is a way of understanding what brains do. Starting with insects sounds wise - it helps understand the basics and this knowledge can be applied to more complex brains in time and, one day, to human brains. I thought we were closer to understanding how brains work - seems not…in which case we can’t really create an artificial brain yet. This begs questions about artificial general intelligence.

@alex_p_roe

Artificial general intelligence would require, at the very least, explicitly modeling the world and predicting future trajectories for objects in it. And since we don't yet understand how specific functions such as wayfinding/navigation, learning in its many forms, and perception of one's own body work in the brain, there's little hope so far for a constructed system to do so, beyond very limited use cases in highly controlled and constrained environments.

@albertcardona Thanks for confirming what I’d suspected for a while - that this technology is very much in its infancy. I still find it fascinating and future iterations will no doubt be very powerful especially when combined with the kind of research being carried out by yourself and others until one day the ability to emulate human brains arrives - unlikely in my lifetime (am 60), I suspect…but quantum computing may help.
@albertcardona Now, re #AI and the so-called, and erroneously by the sounds of things, phenomenon of “hallucination”, I was not aware of the curse of dimensionality (Have looked it up) but it appears to be a(nother) good reason to be very wary of the output of LLMs for all but basic tasks. Using “toys” to complete serious tasks is asking for trouble IMO. There’s really zero intelligence in so-called LLM #AI is there?! Thanks again.

@alex_p_roe

"Intelligence" is a loaded word. An LLM models language and has been show to perform well in some tasks such as machine translation from say English to Spanish. Coupling LLMs with generative approaches is being called "AI", but again it's an input-output mapping, there's no explicit modeling of the world or of the self mind and body, and without several hundred million years of evolution in the tuning.

@albertcardona Yup, intelligence is a highly loaded word. But LLMs do not come close, even if they can give the impression they do. Then there are different variations of intelligence: people who are good with numbers but not practical, people who can do many things, people who are good with languages, computers etc., and dogs (amongst many other animals) who are seemingly more intelligent than others. It’s a fascinating if complex field - plenty to keep you very busy for decades!
@albertcardona Here’s a rather worrying use of current state LLMs: https://arstechnica.com/ai/2025/07/ai-therapy-bots-fuel-delusions-and-give-dangerous-advice-stanford-study-finds/ - this really ought to be ended. It’s very dangerous.
@alex_p_roe It’s frankly malpractice. Only someone who doesn't understand, or wilfully misunderstands because of a conflict of interest, could give the go-ahead to such a thing as LLM-based counseling.
@albertcardona Completely agree. Someone needs to intervene to potentially save lives. BTW many thanks for your time and clear explanations. Greatly appreciated 🙂 I had heard of people working with fruit flies to understand brain function and now I know one of the people working in this field! Gotta love social media…sometimes 🙂😉

@alex_p_roe

You are welcome. To peek under the hood of modern "AI", see the transformer architecture [1] and stable diffusion [2]. Conceptually there isn't that much more to it, but of course these techniques draw from a mountain of prior breakthroughs including the U-net architecture, adversarial learning, style transfer, and a lot more. Complementary to these techniques, see as well "The unreasonable effectiveness of data" by Halevy, Norvig, and Pereira [3], made practical by the diminishing cost of computing, in particular Moore's law and the recent engineering feat that is the modern GPU. Of course Jevons paradox [4] comes into play, and our societies are now strained by the water and energy consumption of data centers.

[1] https://en.m.wikipedia.org/wiki/Transformer_(deep_learning_architecture)
[2] https://en.m.wikipedia.org/wiki/Stable_Diffusion
[3] https://static.googleusercontent.com/media/research.google.com/ca//pubs/archive/35179.pdf
[4] https://en.m.wikipedia.org/wiki/Jevons_paradox


@alex_p_roe Speaking of fruit fly research, you'd be amused or surprised to learn that the original U-net architecture (which today powers stable diffusion, among many other machine learning techniques) introduced in a paper by Ronneberger et al. (2015; https://arxiv.org/abs/1505.04597 ) was developed to perform image segmentation of fly neural tissue as imaged with electron microscopy, to reconstruct neurons and therefore map the brain connectome.

So all those "wasteful" research funding grants to fruit fly research motivated and led to one of the biggest discoveries fueling the whole of the modern "AI" boom. One never knows where basic research will lead; it's impossible to predict. Hence basic research is not at all wasteful; on the contrary, it's essential: the foundation of a rich, wealthy, creative society. And also very cheap, comparatively: https://albert.rierol.net/tell/20160601_Unintended_consequences_of_untimely_research.html

Search also for the returns on the human genome project, or on the humble origins of DNA sequencing, to name just two among many.

#Drosophila #StableDiffusion #MachineLearning #academia

@albertcardona Uh oh, I fear we are about to delve in the dark realm that is politics! I am under no illusion that ML and NN can be extremely useful - I follow a few science and tech websites and see articles about breakthroughs which have come about thanks to ML and NN-AI all the time. In the right hands, this development tech is a boon, in the wrong hands, it’s a weapon. (1/2)
I wasn’t aware that stable diffusion came about thanks, indirectly, to fruit fly research but I’m not particularly surprised, maybe faintly amused but I fully appreciate the value of research unlike too many who, alas, do not including far too many politicos, sadly. (2/2)
@albertcardona @alex_p_roe okay knowledge is linked, we get it. Wasteful how? Sounds like you're conflating impact with budget.
@QNFO
Looks like you missed the inverted commas.
@albertcardona Thanks for all the links - I’ve already started exploring the information they lead to. Moving back to LLMs for a second, I don’t think calling them “toys” is totally fair as if their abilities are harnessed in appropriate contexts, they can be rather valuable tools - I know, as I have used some in some minor ways. And I’ve seen performance improvements too. Am sure your work will lead to fascinating discoveries, directly or indirectly 🙂 Fascinating stuff!
@albertcardona Have you observed any recurrent circuits between the MB/LH and the antennal lobe that would mediate top-down modulation (except the CSD)?
@silkesachse I haven’t looked explicitly, but there are recurrent loops between the MB and LH for sure. I wonder if the Greg Jefferis lab has looked into recurrent loops between these and the antennal lobe. In Berck, Khandelwal et al. 2016 we reported on a descending neuron in the #Drosophila larva that synapses directly onto an ORN axon. There may be others that synapse onto LNs or other antennal lobe neurons. Happy to explore further.
@albertcardona Thank you so much for your reply! I will have a look at the Berck paper again.
@albertcardona Congrats, Albert and team! Would you say at this point there remains any "Terra Incognita"? Sets of neurons with no obvious high-level function?

@debivort Thanks! The work has only just begun. While many parts of the brain are well known, others can now be analyzed on the basis of the known #connectome, which indicates how they relate to other, better-known parts of the #Drosophila brain.

A huge help are the thousands upon thousands of genetic driver lines generated by #HHMIJanelia, useful to, e.g., optogenetically manipulate single cell types consisting of a single pair of left-right neurons. The fly larva has many advantages :)

@albertcardona I guess I'm asking in the sense that 10 years ago, the wedge, antler, etc. in the adult brain felt like "Terra Incognita" - we had no real idea what they did. Anything at all remaining like that in the larva?
@debivort Yes, some parts of the ventral brain, as regions, remain somewhat mysterious, but mostly because nobody ever studied them. Now, circuits understood elsewhere are woven with them, so they should soon come to light. Also, some of them are part of the larval central complex, which I presented at the Central Complex meeting at Janelia last month.

@albertcardona That there are still some mysterious areas has a romantic appeal to me, but it's cool how rapidly the list is disappearing.

sorry I wasn't at that meeting!

@albertcardona wow. Has anyone designed a software controls system that mimics that architecture as a way to fly a robot?

@drdrowland The #connectome of the #Drosophila larval brain is now available for further studies. Looking forward to what everyone will come up with. See for example how its learning module inspired the design of a neural network that requires 50x fewer weights and learns as fast:

Hong J, Pavlic TP. An insect-inspired randomly, weighted neural network with random fourier features for neuro-symbolic relational learning. 2021 https://arxiv.org/abs/2109.06663

#neuroscience

@albertcardona thanks! that's exactly the kind of applications of living connectome measurements i love
@albertcardona does this provide sufficient info to simulate the drosophila larval brain?
@joshuabecker Depends on who you ask, but yes, one could now attempt simulating the whole #Drosophila larval brain. There's plenty of data to constrain the model in meaningful ways, and we are going to attempt just that. I hope others do too; we'll learn different things from different models.
#neuroscience
@albertcardona Fantastic achievement. Now the question arises whether studying drosophila will tell us principles of brain function that generalise to zebra fish, mice, and primates.

@dickretired Thank you! I, for one, am enthusiastic about the ability to now formulate computational models of the #Drosophila brain on the basis of the known #connectome, and to explore the model experimentally thanks to the thousands of single-cell-type genetic driver lines (GAL4 lines) for the optogenetic manipulation and monitoring of neural activity of all these neurons. Note that cell types in #Drosophila larvae are most often a single pair of left-right symmetric neurons.

#neuroscience

@albertcardona There is no doubt that we need simple models of the brain and simple behaviour, e.g. zebrafish moving up vs. down. It is the only way we have a chance of understanding the whole system at work. We just have to hope that the results will generalise to mice and primates. But we have to study those too; otherwise we will never find out.
@albertcardona @dickretired and we have to then hope the mouse and primate results generalise to humans… However I would say it depends on your question: these organisms can be of interest without the need for such generalisation. For instance, the motion detection algorithm used in the Logitech optical mouse is an implementation of the Reichardt correlator, discovered in beetles and characterised mechanistically in flies…

@neuralengine @albertcardona

i agree that any animal and any brain is worth studying for its own sake. But the hope in science is that simple models will provide an entry into understanding more complex ones.

@dickretired @neuralengine We've argued as much: by carefully selecting representative organisms, chosen among those with a complete set of body parts yet of small, tractable dimensions, we can infer the archetypal neural architecture of each bauplan. We can then generalize within its branch of the phylogenetic tree, compare across branches to determine what's shared and what's unique, and study the correlations with behaviour.

"Neural architectures in the light of comparative connectomics" https://www.sciencedirect.com/science/article/pii/S0959438821001185

@neuralengine @albertcardona @dickretired
Brains are just too beautiful to not be studied 🙂

@albertcardona
Congratulations, that's one gigantic amount of work!

And yet, the way you write it, you make it sound as if the flow of information through all of these neurons ran strictly from sensory neurons to motor neurons 😜 😈

(sorry, couldn't help, things like these really trigger me these days 😇 Really great work!)

@brembs Thanks! It is gigantic: 10 years of not only reconstructing all neurons, but also analysing and publishing individual subsets (the antennal lobe, the mushroom body, the optic lobe, the somatosensory system, the motor system, and circuits beyond) that help make sense of the rest of the brain and nerve cord. Two thirds of what we publish here is new; the other third we already knew quite well.

And yes: graph-theoretically, it's useful to study from sensory to motor ... but note the figure on recurrence!

@albertcardona
Again, congratulations! Incredible!
@albertcardona @brembs that's just fantastic. Congrats, all!

@albertcardona

Wow that is absolutely amazing!

@albertcardona
Beautiful!!
Looking forward to learning more about invertebrates. Unfortunately my knowledge is limited to vertebrates and a little bit of cephalopods.
@albertcardona Congratulations Albert and team - beautiful work & fantastic 'toot'orial to sculpt a great hook to the pre-print! Amazing!! 👏 👏 👏

@Mill_lab Thank you! There's a lot more in the paper, just tried to highlight a few points I find most salient.

It's been a while, a lifetime really, since that fateful October 2012, only 10 months into my tenure at #HHMIJanelia when I invited the whole #Drosophila larval field, at the bi-annual maggot meeting, to join us in mapping and analysing the wiring diagram of the whole central nervous system. We are getting there :)

@albertcardona 10 years (inc. a pandemic) seems like nothing for an awesome community project on the scale of all this! Really amazing...