3,013 neurons, half a million synapses: the complete #connectome of the whole #Drosophila larval brain!

Winding, Pedigo et al. 2022. "The connectome of an insect brain" https://www.biorxiv.org/content/10.1101/2022.11.28.516756v1

We’ve mapped and analysed its circuit architecture, from sensory neurons to brain output neurons, as reconstructed from volume electron microscopy, and here is what we found. 1/

#neuroscience #connectomics #vEM #volumeEM

Our map of the #Drosophila larval brain #connectome is complete, with all inputs and all outputs, and everything in between: all polysynaptic pathways from sensory neurons all the way to brain output neurons, across both brain hemispheres. 2/

#neuroscience #connectomics

Our analysis of the #Drosophila larval brain starts by recognizing that neurons are polarized: 95.5% of all brain neurons have clearly segregated axons and dendrites.

In the #connectome, we found 66% axo-dendritic synapses, 26% axo-axonic, 6% dendro-dendritic and 2% dendro-axonic.

This matters because inputs onto dendrites contribute to the integration function of a neuron; inputs onto an axon modulate its output. Analysing them separately makes sense.

#neuroscience #connectomics 3/
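
To make the edge-type split concrete, here is a minimal sketch (toy, made-up data; not our analysis code) of tallying the four synaptic edge types from per-synapse compartment annotations:

```python
# Toy sketch: tally the four synaptic edge types from per-synapse
# annotations of pre- and postsynaptic compartments (made-up data).
from collections import Counter

synapses = [                      # (presynaptic, postsynaptic) compartment
    ("axon", "dendrite"),
    ("axon", "axon"),
    ("dendrite", "dendrite"),
    ("dendrite", "axon"),
    ("axon", "dendrite"),
]

counts = Counter(f"{pre}-to-{post}" for pre, post in synapses)
total = sum(counts.values())
for edge_type, n in counts.most_common():
    print(f"{edge_type}: {100 * n / total:.0f}%")
```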

After splitting the #Drosophila larval brain #connectome into 4 types of edges, we used hierarchical spectral clustering to define about 90 groups of neurons.

Remarkably, clusters defined by connectivity alone were internally consistent for other features, such as neuron morphology or function.

Clusters were sorted from sensory neurons (SNs) to descending neurons (DNs) using the Walk-Sort algorithm. To the right, example clusters with intracluster morphological similarity scores computed with NBLAST.

4/
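
For intuition, a minimal sketch of connectivity-based spectral clustering on a toy directed graph with planted groups; our actual pipeline jointly embeds the four edge-type graphs and clusters hierarchically, so treat this only as an illustration of the idea:

```python
# Minimal illustration: spectral clustering on a toy directed "connectome"
# with two planted clusters (not the paper's joint-embedding pipeline).
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n = 120
A = (rng.random((n, n)) < 0.02).astype(float)    # sparse background graph
A[:40, :40] += (rng.random((40, 40)) < 0.3)      # planted cluster 1
A[40:80, 40:80] += (rng.random((40, 40)) < 0.3)  # planted cluster 2

affinity = A + A.T                               # symmetrize the digraph
np.fill_diagonal(affinity, 0)
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(np.bincount(labels))                       # recovered cluster sizes
```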

Next, we explored the #Drosophila larval #connectome with multi-hop signal cascades (left) that extended across synapses up to a depth of 5 hops. We sorted neurons into labelled-line and multisensory categories (right).

Neurons were considered to receive sensory input when they were visited in most cascade iterations.

The majority of brain neurons integrate input from all sensory modalities, but a few integrate from only one (labelled lines) or from a specific combination.

#neuroscience #connectomics

5/
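
Here is a minimal sketch of a probabilistic multi-hop cascade on a synapse-count matrix; the activation rule and the per-synapse transmission probability p are simplifying assumptions, not our exact cascade model:

```python
# Minimal cascade sketch: binary activation spreading over a synapse-count
# matrix W, where each synapse transmits independently with probability p.
# (Simplifying assumptions; not the paper's exact cascade model.)
import numpy as np

def cascade(W, seeds, max_hops=5, p=0.05, rng=None):
    """W[i, j] = number of synapses from neuron i onto neuron j."""
    rng = rng or np.random.default_rng()
    n = W.shape[0]
    active = np.zeros(n, dtype=bool)
    frontier = np.asarray(seeds)
    active[frontier] = True
    first_hop = {int(i): 0 for i in frontier}
    for hop in range(1, max_hops + 1):
        drive = W[frontier].sum(axis=0)          # synaptic drive per neuron
        p_fire = 1.0 - (1.0 - p) ** drive        # P(at least one transmits)
        fire = (rng.random(n) < p_fire) & ~active
        frontier = np.flatnonzero(fire)
        if frontier.size == 0:
            break
        active[frontier] = True
        for i in frontier:
            first_hop[int(i)] = hop
    return first_hop  # neuron index -> hop at which it first fired

# Toy usage: random sparse "connectome", cascade seeded at neurons 0-2.
rng = np.random.default_rng(1)
W = rng.poisson(0.05, size=(300, 300))
print(len(cascade(W, [0, 1, 2], rng=rng)), "neurons reached within 5 hops")
```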

Then we studied recurrent circuits in the #Drosophila larval brain.

By starting bidirectional multi-hop signal cascades at each cluster, we found that the cluster containing the dopaminergic neurons (DANs) of the insect centre for associative learning and memory, the mushroom body (MB), presents the most cascades that begin and end in itself!

In other words, DANs, which mediate learning, are the most recurrent neurons in the brain.

#neuroscience #connectomics 6/
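
As a toy proxy for that recurrence measure (not our bidirectional cascade analysis), one can count multi-hop paths that start in a cluster and return to it on a binarized connectivity matrix:

```python
# Crude recurrence score on a binarized connectivity matrix: the fraction
# of paths of length <= max_hops starting in `members` that end back in
# `members`. A toy proxy, not the paper's bidirectional cascade analysis.
import numpy as np

def recurrence_score(W, members, max_hops=5):
    B = (W > 0).astype(float)
    n = B.shape[0]
    reached = np.zeros(n)
    Bk = np.eye(n)
    for _ in range(max_hops):
        Bk = Bk @ B                          # path counts, one hop longer
        reached += Bk[members].sum(axis=0)   # paths starting in the cluster
    total = reached.sum()
    return reached[members].sum() / total if total else 0.0

# Toy usage: random graph, hypothetical "DAN cluster" of neurons 0-9.
rng = np.random.default_rng(2)
W = (rng.random((200, 200)) < 0.03).astype(float)
print(recurrence_score(W, list(range(10))))
```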

With all descending neurons (DNs) mapped, we could have a look at how the #Drosophila larval brain drives locomotion.

By determining the spatial projection patterns of all DN axons, and the known contribution of each body segment to locomotion, we inferred which behaviours each DN could control, and then which brain neurons control those DNs (a toy sketch of this inference follows below).

#neuroscience #connectomics 7/
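
That inference step can be sketched as a matrix product: a hypothetical DN-to-segment projection matrix times a segment-to-behaviour contribution matrix yields which behaviours each DN could drive (made-up values, not our data):

```python
# Toy version of the inference: DN-to-segment projections combined with
# segment-to-behaviour contributions via a matrix product (made-up values).
import numpy as np

dn_to_segment = np.array([[1, 0, 0],         # DN 1 targets segment 1
                          [0, 1, 1]])        # DN 2 targets segments 2 and 3
segment_to_behaviour = np.array([[1, 0],     # segment 1 -> behaviour A
                                 [1, 1],     # segment 2 -> behaviours A, B
                                 [0, 1]])    # segment 3 -> behaviour B
dn_to_behaviour = dn_to_segment @ segment_to_behaviour
print(dn_to_behaviour)  # nonzero entries: behaviours each DN could drive
```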

A huge THANK YOU to everyone who worked on this project over 10 years, starting with first co-authors Michael Winding and Ben Pedigo at the University of Cambridge and Johns Hopkins University. A collaboration with Marta Zlatic, Carey E. Priebe, and Joshua Vogelstein.

This work started at #HHMIJanelia and continued at the #MRCLMB in Cambridge, UK.

All neuron reconstructions were done painstakingly by hand with #CATMAID by more than 80 people! Thanks so much!

#neuroscience #connectomics
/END

@albertcardona As far as you know, do the neural networks at the heart of machine (and deep?) learning attempt to mimic, or end up mimicking, dopaminergic neurons (DANs), which appear to be linked to the ability to learn? I imagine emulating sensory/multisensory neurons (SNs) would be useful in the field of robotics.

@alex_p_roe

Artificial neural networks work in a very different way to biological ones. For one, it takes a deep neural network to emulate the capabilities of a single pyramidal neuron in the mammalian cortex. And ANNs lack axo-axonic synapses, active dendritic spikes, redundant inputs across different dendritic branches, and more. All of these matter a lot and are the subject of a number of scientific publications. The differences are huge. Not at all comparable beyond the fact that both are networks.

@albertcardona Thanks. Very interesting to hear that ANNs are not close to mammalian neural networks - which may mean they are unable to become sentient - could this be the cause of hallucination? I imagine we will need to build a copy of a brain and then “teach” it, although without sensory organs, this won’t be easy unless we create a replica of a living organism. Is that where your insect studies are heading?

@alex_p_roe For the time being I am content with mapping brain circuits and making sense of them through a combination of genetics, functional imaging, observation of behavioural perturbations, and computational modeling. All of this is possible in a tiny organism, and not at all in a large one; at least, not if one has the ambition of studying the complete brain at nanometre resolution.

As for the "hallucinations": large language models don't hallucinate. They are simply statistical models of language, and therefore what they generate is what is plausible given the corpus of texts they were trained on. They do not reason. They also fall prey to the curse of dimensionality: the higher the number of dimensions, the more "empty space" – regions of the activation space where nothing makes sense – there is.
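
A quick numerical illustration of that last point (a toy demo, nothing more): pairwise distances between random points concentrate as the dimension grows, so "near" and "far" lose meaning and most of the space is empty.

```python
# Toy demo: pairwise distances between random points concentrate as the
# dimension grows (std/mean shrinks), leaving mostly "empty" space.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    X = rng.random((200, d))      # 200 random points in the unit cube
    D = pdist(X)                  # all pairwise Euclidean distances
    print(f"d={d:4d}  mean={D.mean():7.3f}  std/mean={D.std()/D.mean():.3f}")
```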

@albertcardona An interesting and clear reply. Thanks. Mapping brain circuits does sound interesting and is a way of understanding what brains do. Starting with insects sounds wise - it helps understand the basics, and this knowledge can be applied to more complex brains in time and, one day, to human brains. I thought we were closer to understanding how brains work - seems not…in which case we can’t really create an artificial brain yet. This raises questions about artificial general intelligence.

@alex_p_roe

Artificial general intelligence would require at the very least explicitly modeling the world and predicting future trajectories for objects in it. At the very least. Indeed, since we don't yet understand how specific functions such as wayfinding/navigation, learning in its many forms, and perception of one's own body work in the brain, there's little hope so far for a constructed system to do so, beyond very limited use cases in highly controlled and constrained environments.

@albertcardona Thanks for confirming what I’d suspected for a while - that this technology is very much in its infancy. I still find it fascinating, and future iterations will no doubt be very powerful, especially when combined with the kind of research being carried out by yourself and others, until one day the ability to emulate human brains arrives - unlikely in my lifetime (am 60), I suspect…but quantum computing may help.
@albertcardona Now, re #AI and the so-called (and, by the sounds of things, erroneously named) phenomenon of “hallucination”: I was not aware of the curse of dimensionality (have looked it up), but it appears to be a(nother) good reason to be very wary of the output of LLMs for all but basic tasks. Using “toys” to complete serious tasks is asking for trouble IMO. There’s really zero intelligence in so-called LLM #AI, is there?! Thanks again.

@alex_p_roe

"Intelligence" is a loaded word. An LLM models language and has been show to perform well in some tasks such as machine translation from say English to Spanish. Coupling LLMs with generative approaches is being called "AI", but again it's an input-output mapping, there's no explicit modeling of the world or of the self mind and body, and without several hundred million years of evolution in the tuning.

@albertcardona Yup, intelligence is a highly loaded word. But LLMs do not come close, even if they can give the impression they do. Then there are different variations of intelligence - people who are good with numbers but not practical, people who can do many things, people who are good with languages, computers, etc. - and dogs (amongst many other animals) who are seemingly more intelligent than others. It’s a fascinating if complex field - plenty to keep you very busy for decades!
@albertcardona Here’s a rather worrying use of current state LLMs: https://arstechnica.com/ai/2025/07/ai-therapy-bots-fuel-delusions-and-give-dangerous-advice-stanford-study-finds/ - this really ought to be ended. It’s very dangerous.
@alex_p_roe It’s frankly malpractice. Only someone who doesn't understand, or wilfully misunderstands because of a conflict of interest, could give the go-ahead to such a thing as LLM-based counseling.
@albertcardona Completely agree. Someone needs to intervene to potentially save lives. BTW many thanks for your time and clear explanations. Greatly appreciated 🙂 I had heard of people working with fruit flies to understand brain function and now I know one of the people working in this field! Gotta love social media…sometimes 🙂😉

@alex_p_roe

You are welcome. To peek under the hood of modern "AI", see the transformer architecture [1] and stable diffusion [2]. Conceptually there isn't that much more to it, but of course these techniques draw from a mountain of prior breakthroughs including the U-net architecture, adversarial learning, style transfer, and a lot more. Complementary to these techniques, see as well "The unreasonable effectiveness of data" by Halevy, Norvig, and Pereira [3], made practical by the diminishing cost of computing, in particular Moore's law and the recent engineering feat that is the modern GPU. Of course Jevons paradox [4] comes into play, and our societies are now strained by the water and energy consumption of data centers.

[1] https://en.m.wikipedia.org/wiki/Transformer_(deep_learning_architecture)
[2] https://en.m.wikipedia.org/wiki/Stable_Diffusion
[3] https://static.googleusercontent.com/media/research.google.com/ca//pubs/archive/35179.pdf
[4] https://en.m.wikipedia.org/wiki/Jevons_paradox


@alex_p_roe Speaking of fruit fly research, you'd be amused or surprised to learn that the original U-net architecture (which today powers stable diffusion, among many other machine learning techniques) introduced in a paper by Ronneberger et al. (2015; https://arxiv.org/abs/1505.04597 ) was developed to perform image segmentation of fly neural tissue as imaged with electron microscopy, to reconstruct neurons and therefore map the brain connectome.

So all those "wasteful" research funding grants to fruit fly research motivated and led to one of the biggest discoveries fueling the whole of the modern "AI" boom. One never knows where basic research will lead; it's impossible to predict. Hence basic research is not at all wasteful. On the contrary, it's essential: it's the foundation of a rich, wealthy, creative society. And also very cheap, comparatively: https://albert.rierol.net/tell/20160601_Unintended_consequences_of_untimely_research.html

Search also for the returns on the Human Genome Project, or on the humble origins of DNA sequencing, to name just two among many.

#Drosophila #StableDiffusion #MachineLearning #academia

@albertcardona Uh oh, I fear we are about to delve into the dark realm that is politics! I am in no doubt that ML and NNs can be extremely useful - I follow a few science and tech websites and see articles about breakthroughs which have come about thanks to ML and NN-AI all the time. In the right hands, this tech is a boon; in the wrong hands, it’s a weapon. (1/2)
I wasn’t aware that stable diffusion came about thanks, indirectly, to fruit fly research, but I’m not particularly surprised - maybe faintly amused. I fully appreciate the value of research, unlike too many who, alas, do not - including far too many politicos. (2/2)
@albertcardona @alex_p_roe okay knowledge is linked, we get it. Wasteful how? Sounds like you're conflating impact with budget.
@QNFO
Looks like you missed the inverted commas.
@albertcardona Thanks for all the links - I’ve already started exploring the information they lead to. Moving back to LLMs for a second, I don’t think calling them “toys” is totally fair: if their abilities are harnessed in appropriate contexts, they can be rather valuable tools - I know, as I have used some in minor ways. And I’ve seen performance improvements too. Am sure your work will lead to fascinating discoveries, directly or indirectly 🙂 Fascinating stuff!
@albertcardona Have you observed any recurrent circuits between the MB/LH and the antennal lobe that would mediate top-down modulation (except the CSD)?
@silkesachse I haven’t looked explicitly, but there are recurrent loops between the MB and LH for sure. I wonder if the Greg Jefferis lab has looked into recurrent loops between these and the antennal lobe. In Berck, Khandelwal et al. 2016 we reported a descending neuron in the #Drosophila larva that synapses directly onto an ORN axon. There may be others that synapse onto LNs or other antennal lobe neurons. Happy to explore further.
@albertcardona Thank you so much for your reply! I will have a look at the Berck paper again.
@albertcardona Congrats, Albert and team! Would you say at this point there remains any "Terra Incognita"? Sets of neurons with no obvious high-level function?

@debivort Thanks! The work has only just begun. While many parts of the brain are well known, others can now be analyzed on the basis of the known #connectome, which indicates how they relate to other, better-known parts of the #Drosophila brain.

A huge help are the thousands upon thousands of genetic driver lines generated by #HHMIJanelia, useful to, e.g., optogenetically manipulate single cell types consisting of a single left-right pair of neurons. The fly larva has many advantages :)

@albertcardona I guess I'm asking in the sense that 10 years ago, the wedge, antler, etc. in the adult brain felt like "Terra Incognita" - we had no real idea what they did. Anything at all remaining like that in the larva?
@debivort Yes, some parts of the ventral brain, as regions, remain somewhat mysterious, but mostly because nobody ever studied them. Now that circuits understood elsewhere are woven together with them, they should soon come to light. Also, some of them are part of the larval central complex—which I presented at the Central Complex meeting at Janelia last month.

@albertcardona That there are still some mysterious areas has a romantic appeal to me, but it's cool how rapidly the list is disappearing.

Sorry I wasn't at that meeting!