210 Followers
197 Following
126 Posts
#Neuroscientist | Central complex bumpologist | Research scientist at Janelia
Google Scholar: https://scholar.google.com/citations?user=zt3UYx0AAAAJ&hl=en
Janelia page: https://www.janelia.org/people/brad-hulse
Laura Grima et al. provide new insights into the algorithms mice use when learning to forage in large environments with many potential resources! https://biorxiv.org/content/10.1101/2024.11.04.621923v2
📰 "The Fly Disco: Hardware and software for optogenetics and fine-grained fly behavior analysis"
http://biorxiv.org/cgi/content/short/2024.11.04.621948v1?rss=1 #DrosophilaMelanogaster
#Drosophila
🔥 Apply to be a Theory Fellow @HHMIJanelia!

🖥️ Access state-of-the-art computational infrastructure
👩🏽‍🔬 Collaborate within our Computation & Theory research area
🧪 Tackle fundamental questions in #computationalbiology

Apply by Jan. 7 ➡️ https://janelia.link/theoryfellowprogram
Janelia Theory Fellow Program

The Computation & Theory (C&T) group at Janelia invites applications for the Janelia Theory Fellow program. Our group brings together expertise in computational and mathematical biology, biophysics, machine vision and learning, and theoretical neuroscience. We tackle fundamental questions in biology by maximizing the insight extracted from biological data and closing the loop…

Janelia Research Campus

The NeuronBridge website, for matching #ElectronMicroscopy and #LightMicroscopy data for #Drosophila #Neuroscience research, now includes EM data from FlyWire. Getting it working was a great effort by @neomorphic, Cristian Goina, Hideo Otsuna, Robert Svirskas, and @konrad_rokicki. Here is a screen capture of looking up a neuron on FlyWire Codex, finding its match in NeuronBridge, then viewing the match in 3D with volume rendering in the browser.

https://neuronbridge.janelia.org

NeuronBridge

Anatomy search for Drosophila neurons in Janelia LM/EM datasets

@albertcardona

I also wrote a little blog post about the "unsung" behind-the-scenes heroes on the FlyWire project: https://flyconnecto.me/2024/10/02/flywire-is-live-%f0%9f%9a%80/

FlyWire is live! 🚀

Almost exactly a year ago we blogged about the finished map of the fruit fly brain. Today, we celebrate the publication of the two (much improved) papers - one led by the folks in Princeton, one led by us - that jointly describe this FlyWire brain dataset in Nature:

- Dorkenwald et al. describes the overall resource and the proofreading effort, and showcases some high-level analyses of the dataset.
- Schlegel et al. provides neuron annotations and validates the dataset against another (partial) brain map.

[Images: 3D renderings of the 80 endocrine neurons of the fruit fly brain (which release neurohormones such as insulin-like peptides into the fly's hemolymph), the 8k visual projection neurons connecting the fly's visual system to the central brain, all ~75k neurons in the fly's visual system, and all ~140k neurons in the fruit fly brain. Data source: FlyWire.ai; renderings by Philipp Schlegel (University of Cambridge/MRC LMB).] See our media page for more images + videos!

Here is an analogy that I find useful to explain this duet: imagine having satellite images of the entire world and wanting to turn them into Google Maps or - even better - OpenStreetMap. The first thing you need to do is find and digitize all the roads, buildings, and natural structures such as rivers and lakes. At that point you can already generate instructions on how to get from coordinate A to coordinate B. But what you really want is to be able to ask "Show me how to get from 10 Downing Street to 21 Baker Street" or "Find me a nice pizzeria somewhere close by". For that you need labels: street names, opening hours, reviews, and so on. Makes sense so far? Good!

Going back to the FlyWire connectome:

- The high-resolution electron microscopy (EM) data of a fly brain (Zheng et al., Cell, 2018) is analogous to the satellite images - it contains all the information but is not very useful in its raw state.
- Using AI to extract neurons (Dorkenwald et al., Nat. Methods, 2022) and synapses (Heinrich et al., arXiv, 2018; Buhmann et al., Nat. Methods, 2021) from the EM data, followed by human proofreading (the new Dorkenwald et al., Nature, 2024), is analogous to finding the roads, buildings, lakes, etc. in the satellite images.
- Annotating the neurons with extra information such as cell type, transmitter, etc. (Eckstein et al., Cell, 2024; the new Schlegel et al., Nature, 2024) is analogous to adding names for streets and businesses, opening hours, and so on.

As you can see from the various publications sprinkled throughout the text above, our two shiny new papers are the result of years of work not just by us but by many others. And of course it doesn't stop here: over the course of the next few months, there will be many more papers from labs all over the world using the FlyWire dataset for their work. Nature has put together a collection page to track those appearing as part of the paper "package".

Most (if not all) of the above information is also available through the various press releases and landing pages (see also the links at the bottom of this page). So instead of repeating things you may or may not already know, I'd like to focus on the people that didn't get as much attention.
Unsung heroes

Invariably, there are people whose contributions end up falling a bit by the wayside - not out of maliciousness or neglect, but because when you try to get your paper published nobody seems interested in how the sausage was made (so to speak). As an author you then find yourself adding unsatisfying, half-sentence thank-you notes to the paper's acknowledgements section. To relieve my guilty conscience, I will use the second half of this post to tell you a bit about the behind-the-scenes people who didn't end up in this particular limelight.

Outsourcing

You can perhaps imagine that it is rather difficult for small teams to suddenly and quickly scale up their operations, in particular when you know that you will likely also have to downsize in a few months - either because the project is finished or because money is running out. That's pretty much the situation we found ourselves in when we decided to go all in on FlyWire. While we did grow our team in Cambridge (at peak we had 17 people in the group), both we and Princeton ended up outsourcing parts of the work to specialists. On our end, we contracted Ariadne.ai, who proofread around 14% of the central brain in addition to our own efforts. Aelysia helped with annotations and proofreading whenever things got a bit more tricky. Not contracted by us but by Princeton: several Seung lab alumni founded Zetta.ai, which provides connectomes-as-a-service. They re-aligned the Bock lab's original EM image data and ran the initial segmentation, which the FlyWire consortium collectively worked to proofread over the last few years.

Connectivity

The FlyWire dataset has two key resources: the morphologies of all individual neurons and the network graph of how they connect to each other. Both are intrinsically linked - after all, you can't really connect to someone if they aren't in physical proximity. However, when we started working on FlyWire in mid-2020, the only available data was the neuron segmentation. Consequently, we only ever looked at neuron morphologies and had little to no clue about their connectivity. At the time, there was a "someone will solve that later" attitude to the problem. And what do you know - someone did it! A lot of someones, in fact. The groundwork had been laid by Larissa Heinrich from the Saalfeld lab (Janelia Research Campus), who used AI to detect synaptic clefts in EM images. The second piece of the puzzle - predicting pre- and postsynaptic partners from the clefts - was provided by Julia Buhmann from the lab of Jan Funke (also Janelia Research Campus). Julia and Jan were kind enough to share their data ahead of publication, and just like that we had connectivity for FlyWire neurons! Initially that huge connectivity table (130M rows after some filtering) was a bit clunky to handle, but with a bit of software engineering, querying connections is now pretty seamless.

[Image: synaptic cleft predictions in blue; arrows indicate pre- to postsynaptic connections.]

As the icing on the cake, the Funke lab, in collaboration with Alex Bates (then a PhD student in the Jefferis lab), managed - to everyone's surprise - to reliably predict neurotransmitter identities from the raw EM image data. This data was also kindly shared ahead of publication and is now used in many of the FlyWire papers.
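As a rough illustration of what querying such a pre/post synapse table can look like, here is a minimal pandas sketch; the file name "synapses.parquet" and the columns pre_root_id, post_root_id, and cleft_score are placeholders for this sketch, not the actual FlyWire schema or tooling.

```python
# Hypothetical illustration of querying a large pre/post synapse table.
# The file name and column names are assumptions, not the real FlyWire schema.
import pandas as pd

syn = pd.read_parquet("synapses.parquet")          # one row per predicted synapse

# Keep reasonably confident predictions, then count synapses per connection.
strong = syn[syn["cleft_score"] >= 50]
edges = (
    strong.groupby(["pre_root_id", "post_root_id"])
    .size()
    .reset_index(name="n_synapses")
)

# Downstream partners of one neuron, strongest connections first.
neuron = 720575940621039145                        # made-up example root ID
partners = (
    edges[edges["pre_root_id"] == neuron]
    .sort_values("n_synapses", ascending=False)
)
print(partners.head(10))
```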
Software Stack

The FlyWire project as it is today would not have been possible without a great many technical innovations on the software side. Here are shout-outs to some of the relevant people and projects (in no particular order):

- Jeremy Maitin-Shepard from Google developed Neuroglancer, a WebGL-based viewer for volumetric data (images, segmentation, etc.). FlyWire and many other connectome projects use a modified version of Neuroglancer for proofreading.
- The Seung lab and Zetta.ai built the tools to re-align and segment the image data.
- Nico Kemnitz, Akhilesh Halageri and Sven Dorkenwald (then Seung lab) created PyChunkedGraph (part of the CAVE ecosystem, see below), which is the data management and proofreading backend underlying FlyWire.
- Will Silversmith developed various Python libraries (cloud-volume, kimimaro, igneous) to process and interact with connectomics data. A lot of our own tools use his tools under the hood.
- Forrest Collman, Casey Schneider-Mizell, Sven Dorkenwald, Derrick Brittain (all currently at the Allen Institute for Brain Science) and others develop and, importantly, maintain the "Connectome Annotation Versioning Engine" (CAVE). Without getting too much into the weeds: CAVE enables adding extra information on top of the neuron segmentation, crucially including (but not limited to) neuron annotations and synapses.

Tech Support

A lot of the work in the group relies on data and services hosted on our own servers at the MRC-LMB. The person making sure everything from SSL certificates to kernel updates runs smoothly is our own Andrew Champion.

Further reading:
- UKRI press release
- Princeton press release
- University of Vermont press release
- MRC-LMB news story
- Nature's landing and collection page for the FlyWire paper package
- FlyWire.ai homepage
- Codex (FlyWire data explorer)
- For raw data enthusiasts: Zenodo repository with connectivity data (by S. Dorkenwald); Zenodo repository with neuron skeletons and NBLAST scores; GitHub repository with annotations and other data artefacts

Edits (04/10/24):
- Corrected year for the Dorkenwald et al. reference (2018 -> 2022)
- Added Nico Kemnitz as contributor to PyChunkedGraph
- Added Derrick Brittain as contributor to CAVE
- Noted that PyChunkedGraph is part of the CAVE ecosystem
- Added link to the Princeton press release
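To make the cloud-volume shout-out above a bit more concrete, here is a minimal sketch of pulling a small cutout from a precomputed segmentation volume; the gs:// path is a placeholder, not the actual FlyWire data location.

```python
# Minimal cloud-volume sketch: fetch a small cutout of a precomputed
# segmentation. The gs:// path is a placeholder, not a real dataset location.
from cloudvolume import CloudVolume

vol = CloudVolume(
    "precomputed://gs://example-bucket/fly_segmentation",  # placeholder path
    mip=2,              # a downsampled level; mip=0 is full resolution
    use_https=True,
    progress=False,
)

# Slicing returns an array of segment IDs for that region (x, y, z order).
cutout = vol[2048:2304, 2048:2304, 1000:1016]
print(cutout.shape, cutout.dtype)
```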

Fly Connectome

We are looking for exceptional #AIScientists to join our AI@HHMI initiative at @hhmijanelia. We want to uncover fundamental principles underlying complex biological systems that are inaccessible without new innovations that combine #AI with experimental design. We are strongly committed to #OpenScience and interdisciplinary collaboration.

https://hhmi.wd1.myworkdayjobs.com/en-US/External/job/Janelia-Research-Campus/AI-Scientist_R-3065

AI Scientist

Primary Work Address: 19700 Helix Drive, Ashburn, VA, 20147

Please note: We are hiring multiple AI Scientists and are open to various levels of experience.

Intro: AI@HHMI: HHMI is investing $500 million over the next 10 years to support AI-driven projects and to embed AI systems throughout every stage of the scientific process in labs across HHMI. The AI initiative will be centered at HHMI's Janelia Research Campus. Janelia has been at the forefront of AI-driven research in biology for more than 15 years. Its forward-thinking structure, centralized funding, and collaborative culture make it ideally suited to take this bold leap forward.

About the Role: The Artificial Intelligence (AI) Scientist is an outstanding independent scientist who develops, implements, and executes innovative AI-based research projects in the pursuit of biological discoveries or the generation of new research tools. AI Scientists create, lead, or participate in research projects of highly variable scale, ranging from individual work to HHMI-wide strategic initiatives. The AI Scientist will work closely with scientists from across the broader HHMI community. They participate in, build, or supervise highly collaborative research teams consisting of other scientists, engineers, and technicians.

What We Provide:
- A competitive compensation package with comprehensive health and welfare benefits. In addition to the base salary described below, this position may also be eligible for incentive pay.
- Generous training and travel opportunities to workshops and conferences.
- Access to state-of-the-art computational infrastructure and extended capabilities provided through assistance from Janelia's Support teams and Project teams.
- A healthy work-life balance, with on-site childcare, free gyms, on-campus housing, vibrant social and dining spaces, and shuttle-bus service to Janelia from the Washington, DC metro area. Flexible work arrangements are available.
- Relocation assistance for non-local candidates who wish to transition to the Washington, DC metro area.

What You'll Do:
- Plan, initiate, and rigorously execute independent or collaborative AI-based research; may oversee personnel and budgets toward this goal.
- Perform scientific duties on new and varied problems where only general objectives are stated.
- Explore and develop new methods, skills, and tools.
- Make research outputs available through peer-reviewed scientific publications, high-quality datasets, code, and applications.
- Collaborate with other scientists, engineers, and technicians across project teams.
- Initiate and be flexible to transfer between projects and groups as appropriate, and participate in reviewing other projects and scientists, providing constructive feedback and guidance.
- Perform advanced development work to obtain and maintain technical leadership in the field of AI.
- Assist HHMI and Janelia leadership in planning and execution of program-wide initiatives; ensure coordination with related efforts or other projects and research areas.
- Adhere to the highest academic, scientific, and ethical standards in all professional activities, including the responsible use of AI technologies.

What You Bring:
- Ph.D. degree in Computer Science, Artificial Intelligence, Machine Learning, Applied Mathematics, Physics, or a related field, or an equivalent combination of education and relevant experience.
- 5+ years of experience in an academic, industry, or government research position is required.
- Demonstrated innovative research in the field of AI, evidenced by peer-reviewed publication in the proceedings of relevant conferences or journals, contribution of code to high-quality open-source repositories, and/or creation of high-quality applications.
- Experience applying AI methods to interdisciplinary research, particularly in the biological sciences or related domains, is preferred.
- Expertise in machine learning with a full understanding of the underlying principles and concepts, including supervised and unsupervised learning, deep learning, reinforcement learning, and probabilistic modeling.
- Recognized by others within and outside their institution as a leader in their area of expertise.
- Proficiency in AI/ML frameworks and tools such as PyTorch, JAX, or similar.
- Strong programming and design skills in Python, Julia, C++, or Rust, with experience in developing and optimizing large-scale computational models.
- Experience with big data technologies and high-performance computing environments.
- Familiarity with data science techniques, including data preprocessing, statistical analysis, and visualization.
- Results-oriented with excellent communication skills.

Please include a cover letter with your application.

Physical Requirements: Remaining in a normal seated or standing position for extended periods of time; reaching and grasping by extending hand(s) or arm(s); dexterity to manipulate objects with fingers, for example using a keyboard; communication skills using the spoken word; ability to see and hear within normal parameters; ability to move about the workspace. The position requires mobility, including the ability to move materials weighing up to several pounds (such as a laptop computer or tablet). Persons with disabilities may be able to perform the essential duties of this position with reasonable accommodation. Requests for reasonable accommodation will be evaluated on an individual basis.

Please Note: This job description sets forth the job's principal duties, responsibilities, and requirements; it should not be construed as an exhaustive statement, however. Unless they begin with the word "may," the Essential Duties and Responsibilities described above are "essential functions" of the job, as defined by the Americans with Disabilities Act.

Pay: In addition to the base salary described below, this position may also be eligible for incentive pay.
- AI Scientist I: 5+ years' experience - $182,160.80 (minimum) - $227,701.00 (midpoint) - $296,011.30 (maximum)
- AI Scientist II: 10+ years' experience - $213,564.00 (minimum) - $266,955.00 (midpoint) - $347,041.40 (maximum)
- AI Scientist III: 15+ years' experience, including program/project management experience - $245,145.60 (minimum) - $306,432.00 (midpoint) - $450,000.00 (maximum)

Pay Type: Annual

HHMI's salary structure is developed based on relevant market data. HHMI considers a candidate's education, previous experience, knowledge, skills, and abilities, as well as internal equity when making job offers.
Typically, a new hire for this position in this location is compensated between the minimum and midpoint. Your recruiter can share more about the specific pay range during the recruitment process.

Compensation and Benefits: Our employees are compensated from a total rewards perspective in many ways for their contributions to our mission, including competitive pay, exceptional health benefits, retirement plans, time off, and a range of recognition and wellness programs. Visit our Benefits at HHMI site to learn more.

HHMI is an Equal Opportunity Employer. Howard Hughes Medical Institute (HHMI) is an independent, ever-evolving philanthropy that supports basic biomedical scientists and educators with the potential for transformative impact. We make long-term investments in people, not just projects, because we believe in the power of individuals to make breakthroughs over time.

Why HHMI: To move science forward we need a diverse collection of talents, expertise, and backgrounds in scientific research and science education, as well as communications, finance, human resources, information technology, investments, law, and operations. At HHMI, we encourage collaborative and results-driven working styles and offer an adaptable environment where employees can do their best work. What makes us strong is the diversity of our perspectives. We work to promote a culture of inclusion in our work environments and across the greater scientific community. To find more information about us and the steps we're taking to make HHMI a more inclusive organization, visit our About Us page.

Your best option for consideration in our career opportunities is to apply directly via our HHMI Careers site. There, you will learn more about HHMI and can find information about our available roles. Contact us at [email protected] if you require an accommodation related to completing the job application. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. Applicants have rights under federal employment laws. For more information about your rights as an applicant, please review these posters: Family and Medical Leave Act (FMLA), Know Your Rights, and Employee Polygraph Protection Act (EPPA).

Absolutely beautiful work from Stephen Huston and coauthors finally published yesterday:
"Motor neurons generate pose-targeted movements via proprioceptive sculpting"
https://www.nature.com/articles/s41586-024-07222-5
Motor control is really complicated and so so interesting. This work embraces that complexity to dissect control of a fly's neck with pretty unexpected but clear results. Rather than each motor neuron encoding a single movement direction, the direction of movement depends on the pose of the head. 1/n
Motor neurons generate pose-targeted movements via proprioceptive sculpting - Nature

Single motor neurons in Drosophila are stimulated to show that they direct head movements towards specific postures rather than generating fixed movement vectors, suggesting that the brain controls movements through a continuing proprioceptive–motor loop.

Nature
New work from a collaboration between several labs at Janelia and DeepMind, in which we built an anatomically detailed biomechanical model of a fly in the physics simulator MuJoCo, then used reinforcement and imitation learning to train it to walk and fly like a fly. I'm excited to figure out how this can help us understand motor control, sensory integration, and embodied cognition. (A minimal MuJoCo loading sketch follows after the links below.)
Preprint: https://www.biorxiv.org/content/10.1101/2024.03.11.584515v1
News story: https://www.janelia.org/news/artificial-intelligence-brings-a-virtual-fly-to-life
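Purely as a hedged sketch of the mechanics (the file name "fly_model.xml" is a placeholder, not the model released with the preprint), loading a body model and stepping the physics with MuJoCo's Python bindings looks roughly like this:

```python
# Rough sketch: load a body model and step the physics with the MuJoCo
# Python bindings. "fly_model.xml" is a placeholder file name, not the
# actual fly model released with the preprint.
import mujoco

model = mujoco.MjModel.from_xml_path("fly_model.xml")
data = mujoco.MjData(model)

# Simulate one second of physics with zero actuation; a trained policy
# (from reinforcement or imitation learning) would set data.ctrl instead.
n_steps = int(1.0 / model.opt.timestep)
for _ in range(n_steps):
    data.ctrl[:] = 0.0
    mujoco.mj_step(model, data)

print("root position after 1 s:", data.qpos[:3])
```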

The Janelia #Unity Toolkit now supports panoramas, back-projected onto a cylindrical screen, updated in real time for a tracked viewpoint within the cylinder. The intended application is #VR for #Neuroscience studies of animals like #Drosophila. The code is a byproduct of another project so it's a bit experimental, but it's fun to watch examples like this one, meant to be displayed with three adjoining projectors. (1/2)
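This isn't the toolkit's actual code, but a minimal sketch of the geometry behind viewpoint-corrected back-projection: for a point on the cylindrical screen, find the direction in which the tracked viewer inside the cylinder sees it, and map that direction to texture coordinates in an (assumed) equirectangular 360-degree panorama.

```python
# Simplified illustration (not the janelia-unity-toolkit implementation) of
# viewpoint-corrected back-projection: map a 3D point on the cylindrical
# screen to (u, v) in an equirectangular panorama, as seen from the tracked
# viewer position inside the cylinder (cylinder axis along z).
import numpy as np

def panorama_uv(screen_point, viewer_pos):
    d = np.asarray(screen_point, float) - np.asarray(viewer_pos, float)
    azimuth = np.arctan2(d[1], d[0])                    # -pi..pi around the axis
    elevation = np.arctan2(d[2], np.hypot(d[0], d[1]))  # -pi/2..pi/2
    u = (azimuth + np.pi) / (2.0 * np.pi)               # 0..1 horizontal
    v = 0.5 + elevation / np.pi                         # 0..1 vertical
    return u, v

# Example: a point on a 1 m radius screen, viewer offset from the cylinder center.
print(panorama_uv(screen_point=(1.0, 0.0, 0.2), viewer_pos=(0.2, 0.1, 0.0)))
```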

https://github.com/JaneliaSciComp/janelia-unity-toolkit

GitHub - JaneliaSciComp/janelia-unity-toolkit: Packages for the Unity game engine, emphasizing animal studies in VR.

Packages for the Unity game engine, emphasizing animal studies in VR. - JaneliaSciComp/janelia-unity-toolkit

GitHub
I guess we do #introduction posts over here? I work on the #neuroscience (am I doing those hashtags right!?) of learning and memory, specifically how we learn while we navigate space and context. To do this, I take in vivo recordings (currently calcium imaging but ephys has my heart) of freely moving rats! After that, I use computational and mathematical approaches to analyze their neural activity! I am currently a BRAIN Initiative K99/R00 postdoc at Northwestern working with John Disterhoft and Sara Solla. I was trained at MIT with Matt Wilson, where I got my PhD in biology, and my BS is from Carnegie Mellon. Welcome!