Adrian Valente

75 Followers
120 Following
82 Posts
I'm a small dynamical system, trying to make sense of a bigger dynamical system. Recent PhD graduate, still at ENS Paris with Srdjan Ostojic.
Website: https://adrian-valente.github.io/
Google Scholar: https://scholar.google.com/citations?user=uyLai34AAAAJ&hl=fr

@albertcardona

I also wrote a little blog post about the "unsung" behind-the-scenes heroes on the FlyWire project: https://flyconnecto.me/2024/10/02/flywire-is-live-%f0%9f%9a%80/

FlyWire is live! 🚀

Almost exactly a year ago we blogged about the finished map of the fruit fly brain. Today, we celebrate the publication of the two (much improved) papers - one led by the folks in Princeton, one led by us - that jointly describe this FlyWire brain dataset in Nature:

- Dorkenwald et al. describes the overall resource and the proofreading effort, and showcases some high-level analyses of the dataset
- Schlegel et al. provides neuron annotations and validates the dataset against another (partial) brain map

[Figures: 3D renderings of (1) the 80 endocrine neurons of the fruit fly brain, which release neurohormones such as insulin-like peptides into the fly's hemolymph; (2) the 8k visual projection neurons connecting the fly's visual system to the central brain; (3) all ~75k neurons in the fly's visual system; and (4) all ~140k neurons in the fruit fly brain. Data source: FlyWire.ai; renderings by Philipp Schlegel (University of Cambridge/MRC LMB). See our media page for more images + videos!]

Here is an analogy that I find useful to explain this duet: imagine having satellite images of the entire world and wanting to turn them into Google Maps or - even better - OpenStreetMap. The first thing you need to do is find and digitize all the roads, buildings and natural structures such as rivers and lakes. At that point you can already generate instructions on how to get from coordinate A to coordinate B. But what you really want is to be able to ask "Show me how to get from 10 Downing Street to 21 Baker Street" or "Find me a nice pizzeria somewhere close by". For that you need labels: street names, opening hours, reviews and so on. Makes sense so far? Good!
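The two-layer structure in this analogy can be made concrete: the "roads" layer is a bare graph over opaque IDs, and the "labels" layer is a set of annotations keyed to those same IDs. A toy sketch in Python - the IDs, types and transmitters below are made up for illustration, not real FlyWire data:

```python
# Layer 1: the "roads" - a bare connectivity graph over opaque neuron IDs
# (adjacency list: each neuron maps to its downstream partners)
edges = {
    720575940601: [720575940602, 720575940603],
    720575940602: [720575940603],
}

# Layer 2: the "street names" - annotations keyed to the same IDs
annotations = {
    720575940601: {"cell_type": "example_type_A", "transmitter": "acetylcholine"},
    720575940602: {"cell_type": "example_type_B", "transmitter": "GABA"},
}

# Only with both layers can you ask label-based questions,
# e.g. "which annotated neurons are cholinergic?"
cholinergic = [nid for nid, a in annotations.items()
               if a["transmitter"] == "acetylcholine"]
# → [720575940601]
```

The graph alone answers "what connects to what"; the annotation layer is what turns that into questions a biologist actually wants to ask.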
Going back to the FlyWire connectome:

- The high-resolution electron microscopy (EM) data1 (Zheng et al. Cell, 2018) of a fly brain is analogous to the satellite image data - it contains all the information but is not very useful in its raw state.
- Using AI to extract neurons (Dorkenwald et al. Nat. Methods, 2022) and synapses (Heinrich et al. arXiv, 2018; Buhmann et al., Nat. Methods 2021) from the EM data, followed by human proofreading (the new Dorkenwald et al. Nature, 2024), is analogous to finding roads, buildings, lakes, etc. on the satellite images.
- Annotating the neurons with extra information such as cell type, transmitter, etc. (Eckstein et al. Cell, 2024; the new Schlegel et al. Nature, 2024) is analogous to adding names for streets and businesses, opening hours and so on.

As you can see from the various publications sprinkled throughout the text above2, our two shiny new papers are the result of years of work, not just by us but by many others. And of course it doesn't stop here: over the course of the next few months, there will be many more papers from labs all over the world using the FlyWire dataset for their work. Nature has put together a collection page to track those appearing as part of the paper "package".

Most (if not all) of the above information is also available through the various press releases and landing pages (see also the links at the bottom of this page). So instead of repeating things you may or may not already know, I'd like to focus on the people that didn't get as much attention.

Unsung heroes

Invariably, there are people whose contributions end up falling a bit by the wayside - not out of maliciousness or neglect, but because when you try to get your paper published, nobody seems interested in how the sausage was made (so to speak). As an author you then find yourself adding unsatisfying, half-sentence thank-you notes to the paper's acknowledgements section.
To relieve my guilty conscience, I will use the second half of this post to tell you a bit about the behind-the-scenes people that didn't end up in this particular limelight.

Outsourcing

You can perhaps imagine that it is rather difficult for small teams to suddenly and quickly scale up their operations - in particular when you know that you will likely also have to downsize again in a few months, either because the project is finished or because money is running out. That's pretty much the situation we found ourselves in when we decided to go all in on FlyWire. While we did grow our team in Cambridge (at peak we had 17 people in the group), both we and Princeton ended up outsourcing parts of the work to specialists. On our end, we contracted Ariadne.ai3, who proofread around 14% of the central brain4 in addition to our own efforts. Aelysia5 helped with annotations and proofreading whenever things got a bit more tricky. Not contracted by us but by Princeton: several Seung lab alumni founded Zetta.ai, which provides connectomes-as-a-service. They re-aligned the Bock lab's original EM image data and ran the initial segmentation, which the FlyWire consortium has collectively worked to proofread over the last few years.

Connectivity

The FlyWire dataset has two key resources: the morphologies of all individual neurons and the network graph of how they connect to each other. Both are intrinsically linked - after all, you can't really connect to someone if they aren't in physical proximity. However, when we started working on FlyWire in mid-2020, the only available data was the neuron segmentation. Consequently, we only ever looked at neuron morphologies and had little to no clue about their connectivity. At the time, there was a "someone will solve that later" attitude to the problem. And what do you know - someone did it! A lot of someones, in fact.
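The network graph mentioned above ultimately boils down to an edge table: one row per (presynaptic neuron, postsynaptic neuron, synapse count). A toy sketch of the kind of query such a table enables - the IDs and counts are invented, and a four-row list stands in for the real, vastly larger table:

```python
from collections import defaultdict

# Toy stand-in for a synapse/edge table: (pre_id, post_id, synapse_count).
# All values are made up for illustration.
edge_table = [
    (101, 202, 12),
    (101, 203, 3),
    (104, 202, 7),
    (101, 202, 5),   # same pre/post pair appears twice: counts aggregate
]

# "Who are the strongest downstream partners of neuron 101?"
outgoing = defaultdict(int)
for pre, post, count in edge_table:
    if pre == 101:
        outgoing[post] += count

partners = sorted(outgoing.items(), key=lambda kv: -kv[1])
# → [(202, 17), (203, 3)]
```

At FlyWire scale the same kind of query has to run over a table with well over a hundred million rows, which is where the real engineering effort comes in.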
The groundwork had been laid by Larissa Heinrich from the Saalfeld lab (Janelia Research Campus), who used AI to detect synaptic clefts in EM images. The second piece of the puzzle - predicting pre- and postsynaptic partners from the clefts - was provided by Julia Buhmann from the lab of Jan Funke (also Janelia Research Campus). Julia and Jan were kind enough to share their data ahead of publication, and just like that6 we had connectivity for FlyWire neurons! Initially that huge connectivity table (130M rows after some filtering) was a bit clunky to handle, but with a bit7 of software engineering, querying connections is now pretty seamless.

[Figure: synaptic cleft predictions in blue; arrows indicate pre- to postsynaptic connections.]

As the icing on the cake, the Funke lab, in collaboration with Alex Bates (then a PhD student in the Jefferis lab), managed - to everyone's surprise - to reliably predict neurotransmitter identities from the raw EM image data. This data was also kindly shared ahead of publication and is now used in many of the FlyWire papers.

Software Stack

The FlyWire project as it is today would not have been possible without a great many technical innovations on the software side. Here are shout-outs to some of the relevant people and projects (in no particular order):

- Jeremy Maitin-Shepard from Google developed Neuroglancer, a WebGL-based viewer for volumetric data (images, segmentation, etc.). FlyWire and many other connectome projects use a modified version of Neuroglancer for proofreading.
- The Seung lab and Zetta.ai built the tools to re-align and segment the image data.
- Nico Kemnitz, Akhilesh Halageri and Sven Dorkenwald (then Seung lab) created PyChunkedGraph (part of the CAVE ecosystem, see below), which is the data management and proofreading backend underlying FlyWire.
- Will Silversmith developed various Python libraries (cloud-volume, kimimaro, igneous) to process and interact with connectomics data. A lot of our own tools use his tools under the hood.
- Forrest Collman, Casey Schneider-Mizell, Sven Dorkenwald, Derrick Brittain (all currently at the Allen Institute for Brain Science) and others develop and, importantly, maintain the "Connectome Annotation Versioning Engine" (CAVE). Without getting too much into the weeds: CAVE enables adding extra information on top of the neuron segmentation, crucially including (but not limited to) neuron annotations and synapses.

Tech Support

A lot of the work in the group relies on data and services hosted on our own servers at the MRC-LMB. The person making sure everything from SSL certificates to kernel updates runs smoothly is our own Andrew Champion8.

Further reading:

- UKRI press release
- Princeton press release
- University of Vermont press release
- MRC-LMB news story
- Nature's landing and collection page for the FlyWire paper package
- FlyWire.ai homepage
- Codex (FlyWire data explorer)

For raw data enthusiasts:

- Zenodo repository with connectivity data (by S. Dorkenwald)
- Zenodo repository with neuron skeletons and NBLAST scores
- Github with annotations and other data artefacts

Edits

04/10/24:
- Corrected year for Dorkenwald et al. reference (2018 -> 2022)
- Added Nico Kemnitz as contributor to ChunkedGraph
- Added Derrick Brittain as contributor to CAVE
- Made a note that ChunkedGraph is part of the CAVE ecosystem
- Added link to Princeton press release

Fly Connectome

Fantastic post by @mre and I agree with every point. Boggles my mind how much people willingly hand over to Google through Chrome when there’s a better option: Firefox.

(No need to tell me about Mozilla's missteps, minor compared to Google's, or that your particular bank's site doesn't work in Firefox — it's possible to use more than one browser)

The Dying Web https://endler.dev/2024/the-dying-web/

The Dying Web | Matthias Endler

I look left and right, and I’m the only one who…

Injuries, arrests, Nazi salutes, hotels housing asylum seekers vandalized... The toll of the week of racist riots in the United Kingdom didn't interest TF1 and France 2, which devoted only 13 minutes to it out of 15 hours of news broadcasts: priority went to the Olympics.
https://www.arretsurimages.net/articles/dans-les-jt-les-emeutes-racistes-au-royaume-uni-eclipsees-par-les-jo
In the TV news, the racist riots in the United Kingdom eclipsed by the Olympics - By Camille Stineau | Arrêt sur images

While the United Kingdom has been shaken for a week by a wave of racist riots, targeting Muslims in particular, the story has stayed in the background on TF1's and France 2's news broadcasts. The racist character of the riots has, for its part, largely been downplayed.

A quick history lesson. From 1940-1980:
•Wealthiest paid 70-94% marginal tax
•0 of them went broke from taxation
•0 of them left USA
•All remained exceedingly wealthy
•Manufacturing boomed
•The middle class was 62% of the US economy (it's now 40% after 'trickle-down' scamenomics)
•We had the strongest middle class growth in US History

Let's do that again. Stop protecting billionaires. Start taxing them.
#TaxBillionaires
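A note on what "marginal" means in the 70-94% figures above: each rate applies only to the income above that bracket's threshold, so nobody pays the top rate on their whole income. A toy sketch with hypothetical brackets (not the historical US schedule):

```python
def marginal_tax(income, brackets):
    """Compute tax under a progressive schedule.

    brackets: list of (threshold, rate) pairs with ascending thresholds;
    each rate applies only to income between its threshold and the next.
    """
    thresholds = [t for t, _ in brackets[1:]] + [float("inf")]
    tax = 0.0
    for (lo, rate), hi in zip(brackets, thresholds):
        if income > lo:
            tax += (min(income, hi) - lo) * rate
    return tax

# Hypothetical brackets: 10% up to 50k, 50% up to 500k, 90% above.
brackets = [(0, 0.10), (50_000, 0.50), (500_000, 0.90)]
tax = marginal_tax(100_000, brackets)
# → 30000.0: an effective rate of 30%, even though this filer's top marginal rate is 50%
```

This is why a 70-94% top marginal rate never meant the wealthy handed over 94% of everything they earned.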

Need to bring some order to all those parameter-efficient finetuning methods?

arxiv.org/abs/2303.15647

https://arxiv.org/abs/2304.12410
@anyabelz

#NLProc #machinelearning

PEFT-Ref: A Modular Reference Architecture and Typology for Parameter-Efficient Finetuning Techniques

Recent parameter-efficient finetuning (PEFT) techniques aim to improve over the considerable cost of fully finetuning large pretrained language models (PLM). As different PEFT techniques proliferate, it is becoming difficult to compare them, in particular in terms of (i) the structure and functionality they add to the PLM, (ii) the different types and degrees of efficiency improvements achieved, (iii) performance at different downstream tasks, and (iv) how differences in structure and functionality relate to efficiency and task performance. To facilitate such comparisons, this paper presents a reference architecture which standardises aspects shared by different PEFT techniques, while isolating differences to specific locations and interactions with the standard components. Through this process of standardising and isolating differences, a modular view of PEFT techniques emerges, supporting not only direct comparison of different techniques and their efficiency and task performance, but also systematic exploration of reusability and composability of the different types of finetuned modules. We demonstrate how the reference architecture can be applied to understand properties and relative advantages of PEFT techniques, hence to inform selection of techniques for specific tasks, and design choices for new PEFT techniques.
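To ground the typology in a concrete example, here is a minimal NumPy sketch of one well-known family of PEFT techniques (a LoRA-style low-rank adapter). This is not code from the paper; the shapes, rank, and initialization are illustrative. The pretrained weight stays frozen and only the low-rank update B @ A would be trained:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4                              # hidden size and adapter rank (illustrative)

W = rng.normal(size=(d, d))               # frozen pretrained weight: never updated
A = rng.normal(scale=0.01, size=(r, d))   # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection; zero init means
                                          # the adapter changes nothing at the start

def forward(x):
    # Effective weight is W + B @ A, but only A and B would receive gradients.
    return x @ (W + B @ A).T

x = rng.normal(size=(d,))
baseline = x @ W.T
# Because B is zero-initialized, the adapted model starts out identical
# to the frozen pretrained model.
trainable = A.size + B.size               # 2*d*r = 512 parameters
frozen = W.size                           # d*d = 4096 parameters
```

The efficiency argument the paper systematizes is visible even at this toy scale: 512 trainable parameters stand in for 4096 frozen ones, and the ratio only improves as d grows.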


Today is the last day for #early bird #registration to #Cosyne2023! If you're coming, and not yet registered, go do it now and save some money! Please boost!!!!

https://www.cosyne.org/registration

Registration — COSYNE

Registration is open for Computational and Systems Neuroscience (COSYNE) 2025 - 27 March - 1 April. Register early to reserve your place and to ensure you get the best rate.

COSYNE
We further tested this mechanism in a novel experiment in which the statistics of the inputs vary abruptly over time. Our model - where only the tonic input adapts to the varying statistics, not the recurrent connectivity - captures key features of the behavior and neural activity.
The geometrical signatures of this computational mechanism were consistent with activity recordings in frontal cortex.

Leveraging the low-rank constraint, we pinpointed the mechanism by which such networks solve the task.

In short, the connectivity generates low-dim manifolds in state space. The speed of neural dynamics along the manifolds is parametrically controlled by the tonic input.
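A toy sketch (not the authors' actual trained model) of the two structural ingredients described in this thread: a rank-one connectivity matrix confines the recurrent dynamics to a low-dimensional manifold, and a tonic input parametrically controls how fast and how far the latent variable moves along it. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt = 200, 0.1
m = rng.normal(size=N)             # left connectivity vector
n = rng.normal(size=N)             # right connectivity vector
J = np.outer(m, n) / N             # rank-one recurrent connectivity

def latent_trajectory(tonic, steps=200):
    """Euler-integrate x' = -x + J tanh(x) + tonic*m,
    tracking the latent coordinate kappa = n.x/N along the manifold."""
    x = np.zeros(N)
    kappa = []
    for _ in range(steps):
        x = x + dt * (-x + J @ np.tanh(x) + tonic * m)
        kappa.append(n @ x / N)
    return np.array(kappa)

slow = latent_trajectory(0.1)
fast = latent_trajectory(0.5)
# The stronger tonic input moves the latent variable along the
# manifold faster (and further) than the weaker one.
```

Because the recurrent matrix is rank one, the high-dimensional state effectively lives on a line spanned by m; the tonic input is the single knob that sets how activity travels along it.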

@nicognitive
Instead, RNNs did generalize when two ingredients were combined: (i) the connectivity was constrained to be low-rank, and (ii) the input was provided tonically (constant during the whole trial).