Peer Herholz

@peerherholz@mastodon.online
287 Followers
247 Following
119 Posts
neuroscience, AI, methods/workflows/data; research affiliate at The Neuro (NeuroDataScience lab) & The McGovern Institute (Senseable Intelligence Group) | E&O chair of OHBM’s sustainability & environmental action SIG | freelancer | he/him
website: https://peerherholz.github.io/
GitHub: https://github.com/peerherholz

Pleased to share my latest research "Zero-shot counting with a dual-stream neural network model" about a glimpsing neural network model that learns visual structure (here, number) in a way that generalises to new visual contents. The model replicates several neural and behavioural hallmarks of numerical cognition.

#neuralnetworks #cognition #neuroscience #generalization #vision #enactivism #enactiveCognition #cognitivescience #CognitiveNeuroscience #computationalneuroscience

https://arxiv.org/abs/2405.09953

Zero-shot counting with a dual-stream neural network model

Deep neural networks have provided a computational framework for understanding object recognition, grounded in the neurophysiology of the primate ventral stream, but fail to account for how we process relational aspects of a scene. For example, deep neural networks fail at problems that involve enumerating the number of elements in an array, a problem that in humans relies on parietal cortex. Here, we build a 'dual-stream' neural network model which, equipped with both dorsal and ventral streams, can generalise its counting ability to wholly novel items ('zero-shot' counting). In doing so, it forms spatial response fields and lognormal number codes that resemble those observed in macaque posterior parietal cortex. We use the dual-stream network to make successful predictions about behavioural studies of the human gaze during similar counting tasks.
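To make the core idea concrete: counting can generalise to novel items precisely when it relies on *where* glimpses land (a dorsal/"where" signal) rather than *what* the items look like (a ventral/"what" signal). The toy sketch below is NOT the authors' dual-stream model; it is a minimal illustration, under my own assumptions, of why a location-only counting mechanism is content-agnostic (the function name and `min_dist` threshold are hypothetical).

```python
import numpy as np

# Toy sketch (not the paper's model): count items from a sequence of 2-D
# glimpse locations alone, ignoring glimpse contents entirely. Because no
# "what" information is used, swapping in wholly novel items ("zero-shot"
# contents) cannot change the count.
def count_from_glimpses(glimpse_locations, min_dist=0.1):
    """Count distinct fixated locations, treating revisits (closer than
    min_dist to an already-seen location) as the same item."""
    seen = []
    for loc in glimpse_locations:
        if all(np.linalg.norm(loc - s) > min_dist for s in seen):
            seen.append(np.asarray(loc, dtype=float))
    return len(seen)

# Four glimpses over three items, one item revisited:
glimpses = np.array([(0.1, 0.1), (0.5, 0.5), (0.1, 0.1), (0.9, 0.2)])
print(count_from_glimpses(glimpses))  # 3
```

The design point the sketch isolates: a counter that only tracks visited locations is invariant to item identity, which is the intuition behind routing enumeration through a dorsal-like stream.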

arXiv.org

Card game developed by the G.H.O.S.T. collective: #OpenScience against Humanity

Presented at #TeamingUp2024

🧵 [1/n]
Why should you submit to the Human-centered Explainable AI workshop (#HCXAI) at #chi2024?

Come for your love of amazing XAI research; stay for our supportive community.

That's what 300+ attendees from 18+ countries have done. Here's a snippet of what they think ⤵️

Join us and submit to the workshop! Deadline: Feb 14
hcxai.jimdosite.com

Please repost and help us spread the word. 🙏

#academia #mastodon #HCI #AI #ResponsibleAI #academicmastodon

An overview of the new EU AI regulation, which helps AI developers do #ai in a responsible, human-in-the-loop way. Of course we should use AI where it makes sense for humans and the planet, but we need to set boundaries and safeguard democracy (= human control) https://dataethics.eu/how-the-eu-became-world-dhampion-in-democratic-regulation-of-artificial-intelligence/
How the EU Became World Champion in Democratic Regulation of Artificial Intelligence · Dataetisk Tænkehandletank

European entrepreneurs and their supporters are wary of tight new regulation of artificial intelligence, AI,...

Dataetisk Tænkehandletank
Want to work with me? We just got a DFG project funded in which we will build and evaluate transparent computer vision models for reading and visual word recognition. If you are interested, or know someone who might be, write me a note!!!

🎉 Tool for better documentation!! New release of sphinx-gallery, which automatically integrates narrative 🐍 examples into documentation
https://sphinx-gallery.github.io/stable/index.html

Highlight: a lightweight recommender system that shows related examples

An illustration of sphinx-gallery:
https://scikit-learn.org/dev/auto_examples/inspection/plot_linear_model_coefficient_interpretation.html
(from @sklearn 's gallery). Note the links to function docs.

Sphinx-Gallery comes with awesome features such as
◼ online execution with Binder or JupyterLite
◼ mini-galleries, e.g. to link an object's docstring to its examples
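Getting started takes only a few lines in `conf.py`. A minimal sketch, assuming your narrative examples live in an `examples/` directory next to `doc/` (the directory names here are assumptions; adapt them to your project layout):

```python
# conf.py -- minimal Sphinx-Gallery setup (directory names are assumptions)
extensions = [
    "sphinx_gallery.gen_gallery",
]

sphinx_gallery_conf = {
    "examples_dirs": "../examples",   # where the narrative .py examples live
    "gallery_dirs": "auto_examples",  # where the rendered gallery is written
}
```

Each `.py` file in `examples/` then becomes a rendered gallery page with its plots, downloadable script, and notebook version.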

Sphinx-Gallery — Sphinx-Gallery 0.14.0-git documentation

Two years ago, Dan Akarca & I wondered: Could the various features we observe in brains across species be caused by shared functional, structural & energetic constraints? 🧠⚡️

With our now published spatially embedded RNNs we show this is true!

🧵 below!
https://www.nature.com/articles/s42256-023-00748-9

#neuroscience #research #newpaper #ML #AI #neuroAI #computational #brain @neuroscience

[seRNN Thread 1/13]

Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings - Nature Machine Intelligence

A fundamental question in neuroscience is what are the constraints that shape the structural and functional organization of the brain. By bringing biological cost constraints into the optimization process of artificial neural networks, Achterberg, Akarca and colleagues uncover the joint principle underlying a large set of neuroscientific findings.

Nature
This [neurolibre preprint](https://neurolibre.org/papers/10.55458/neurolibre.00014) is probably unlike anything you've seen before. The science by Mathieu Boudreau, @agahkarakuzu and a large team of collaborators is fantastic, but here I'm talking about the tech used for the preprint itself.

First, it's not just a lame PDF preprint: it has an [html version](https://preprint.neurolibre.org/10.55458/neurolibre.00014/) filled with interactive figures, and even a dashboard! But that's not what's unique. What really matters is that it is fully reproducible, and has been tested for it. By clicking on the small rocket, you can reproduce the figures yourself, from your browser. All the [data](https://doi.org/10.5281/zenodo.8419809), all the [code](https://doi.org/10.5281/zenodo.8419805), and all the [dependencies](https://zenodo.org/records/8419811) have been published alongside the preprint, and the figures were generated by the NeuroLibre servers, not by the authors! Each reproducibility artefact has its own DOI, cleanly linked to the DOI of the preprint, and the preprint is indexed by Google Scholar, ORCID and the like.

NeuroLibre is based on the amazing Jupyter Book project, and authors can do 99% of the work themselves just by using Jupyter Book and the NeuroLibre [technical docs](https://docs.neurolibre.org/en/latest/). The technical screening of submissions is automated to a very large extent (adapted from the awesome workflow of the Journal of Open Source Software). Check out the publication process, it's on GitHub! https://github.com/neurolibre/neurolibre-reviews/issues/14

Disclaimer: I'm part of the NeuroLibre development team. It's been a team effort (see details [here](https://neurolibre.org/about)), but all of the recent heavy lifting on the platform has been done by @agahkarakuzu. If I can say so myself, this really feels like a publication from the (reproducible) future.

Please consider making your next publication a living research object and submit to NeuroLibre, it's open for beta! This project is part of the Canadian Open Neuroscience Platform (https://conp.ca/), funded by Brain Canada and several partners, including the Courtois foundation, the Montreal Heart Institute, and Cancer Computers.
Results of the ISMRM 2020 joint Reproducible Research & Quantitative MR study groups reproducibility challenge on phantom and human brain T1 mapping

Boudreau et al., (2023). Results of the ISMRM 2020 joint Reproducible Research & Quantitative MR study groups reproducibility challenge on phantom and human brain T1 mapping. NeuroLibre Reproducible Preprints, 14, https://doi.org/10.55458/neurolibre.00014

NeuroLibre

🎓👨‍🦱👩 Post-doc: From missing values to deep learning on sets
https://team.inria.fr/soda/job-offers

with myself and Marine Le Morvan
at @Soda_Inria

Come work with us on an exciting topic across statistics and deep learning

Job offers – Soda – Computational and mathematical methods to understand health and society with data

From Sheeba Samuel and @EvoMRI

"Computational reproducibility of Jupyter notebooks from biomedical publications"

https://arxiv.org/abs/2308.07333

Found 27271 notebooks in 2660 GitHub repositories associated with 3467 articles

22578 notebooks were in Python, including 15817 that had dependencies declared in requirement files

For 10388, all declared dependencies could be installed successfully

1203 notebooks ran through without any errors

879 produced the original results
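The attrition at each stage is striking when laid out as a funnel. A quick sketch (counts taken from the numbers above; the stage labels are my own shorthand) shows that only about 3% of all found notebooks end up reproducing their original results:

```python
# Reproducibility funnel for Jupyter notebooks from biomedical publications,
# with counts taken directly from the post above.
funnel = [
    ("notebooks found", 27271),
    ("written in Python", 22578),
    ("dependencies declared in requirement files", 15817),
    ("all declared dependencies installable", 10388),
    ("ran through without errors", 1203),
    ("reproduced the original results", 879),
]

total = funnel[0][1]
for stage, count in funnel:
    share = 100 * count / total
    print(f"{stage:45s}{count:6d}  ({share:4.1f}% of all notebooks)")
```

Reading the last two rows: roughly 4.4% of all notebooks run without errors, and about 3.2% reproduce the reported results, which is the headline attrition the paper quantifies.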

Computational reproducibility of Jupyter notebooks from biomedical publications

Jupyter notebooks facilitate the bundling of executable code with its documentation and output in one interactive environment, and they represent a popular mechanism to document and share computational workflows. The reproducibility of computational aspects of research is a key component of scientific reproducibility but has not yet been assessed at scale for Jupyter notebooks associated with biomedical publications. We address computational reproducibility at two levels: First, using fully automated workflows, we analyzed the computational reproducibility of Jupyter notebooks related to publications indexed in PubMed Central. We identified such notebooks by mining the articles full text, locating them on GitHub and re-running them in an environment as close to the original as possible. We documented reproduction success and exceptions and explored relationships between notebook reproducibility and variables related to the notebooks or publications. Second, this study represents a reproducibility attempt in and of itself, using essentially the same methodology twice on PubMed Central over two years. Out of 27271 notebooks from 2660 GitHub repositories associated with 3467 articles, 22578 notebooks were written in Python, including 15817 that had their dependencies declared in standard requirement files and that we attempted to re-run automatically. For 10388 of these, all declared dependencies could be installed successfully, and we re-ran them to assess reproducibility. Of these, 1203 notebooks ran through without any errors, including 879 that produced results identical to those reported in the original notebook and 324 for which our results differed from the originally reported ones. Running the other notebooks resulted in exceptions. We zoom in on common problems, highlight trends and discuss potential improvements to Jupyter-related workflows associated with biomedical publications.

arXiv.org