Giuseppe Gallitto

4 Followers
30 Following
40 Posts
Medical Scientist at University Hospital Essen.
PhD student at Predictive Neuroscience Lab (PNI Lab), http://pni-lab.github.io
____
Neuroimaging 🧠
Human-Computer Interaction 🖥️
Artificial Intelligence 🦾
Open Science 🍃
Someone made a real-life version of BMO with Ollama, a Raspberry Pi, and a 3D printer https://lobste.rs/s/9y863y #video #art #programming
https://www.youtube.com/watch?v=l5ggH-YhuAw

Lobsters
The Future of [JetBrains] Fleet

Lobsters

If you maintain an app that has a Linux version and has/needs virtual camera output or video streaming (or already supports Spout2 in its Windows builds), please get in touch!

I'm writing an API to make this easy on Linux and I want to hear about your use case ^^

Recurrent Expansion: A Pathway Toward the Next Generation of Deep Learning https://arxiv.org/abs/2507.08828 #stat.ML #cs.LG
Recurrent Expansion: A Pathway Toward the Next Generation of Deep Learning

This paper introduces Recurrent Expansion (RE) as a new learning paradigm that advances beyond conventional Machine Learning (ML) and Deep Learning (DL). While DL focuses on learning from static data representations, RE proposes an additional dimension: learning from the evolving behavior of models themselves. RE emphasizes multiple mappings of data through identical deep architectures and analyzes their internal representations (i.e., feature maps) in conjunction with observed performance signals such as loss. By incorporating these behavioral traces, RE enables iterative self-improvement, allowing each model version to gain insight from its predecessors. The framework is extended through Multiverse RE (MVRE), which aggregates signals from parallel model instances, and further through Heterogeneous MVRE (HMVRE), where models of varying architectures contribute diverse perspectives. A scalable and adaptive variant, Sc-HMVRE, introduces selective mechanisms and scale diversity for real-world deployment. Altogether, RE presents a shift in DL: from purely representational learning to behavior-aware, self-evolving systems. It lays the groundwork for a new class of intelligent models capable of reasoning over their own learning dynamics, offering a path toward scalable, introspective, and adaptive artificial intelligence. A simple code example to support beginners in running their own experiments is provided in the Code Availability section of this paper.

arXiv.org
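The core RE loop described in the abstract — refit a model, record its behavior (predictions plus loss), and feed that trace back in as extra input for the next round — can be sketched in a few lines. This is a toy illustration only, not the paper's method: the ridge model, the `expand` feature-augmentation step, and all names here are my own hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: linear target plus noise.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

def fit_ridge(X, y, lam=1e-2):
    # Closed-form ridge regression: (X^T X + lam*I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def expand(X, preds, loss):
    # "Behavioral trace" (hypothetical choice): append the previous
    # round's predictions and a constant loss signal as new features.
    return np.hstack([X, preds[:, None], np.full((X.shape[0], 1), loss)])

losses = []
X_cur = X
for round_idx in range(4):
    w = fit_ridge(X_cur, y)
    preds = X_cur @ w
    loss = float(np.mean((preds - y) ** 2))
    losses.append(loss)
    # The next round's model sees this round's behavior as input.
    X_cur = expand(X_cur, preds, loss)

print([round(l, 4) for l in losses])
```

Each iteration here stands in for one "model version" learning from its predecessor's feature maps and loss; the paper's actual architectures and trace definitions are of course far richer.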

The Fedilug emoji has been added:
: fedilug : (without spaces)

Fedilug is the new group that brings together all the Linux enthusiasts on the Fediverse.
Anyone can join, regardless of their instance; on Mastodon you can add the emoji to your profile.

You can join the Fediverse's Linux User Group by following it from here:

 @linux

The group is hosted on https://diggita.com, an Italian Lemmy instance.

#UnoLinux #Fedilug #linux #gnulinux

diggita lemmy social - A participatory-journalism social network since 2007, in the Fediverse with Lemmy since 2024, and since 2025 part of the nonprofit association Fedimedia APS

Lemmy

Yay! The preprint of this paper is finally out -- it was so hard to do, because new LLMs kept coming out and blowing up our measurements. 😂

But the fact of the matter is, for the kind of experimental details we were trying to pull out of the scientific fMRI literature, LLMs have gotten as good as humans. This could make all sorts of fMRI data sharing and understanding much easier.

#neuroscience #science

https://www.biorxiv.org/content/10.1101/2025.05.13.653828v1

Large Language Models Can Extract Metadata for Annotation of Human Neuroimaging Publications

We show that recent (mid-to-late 2024) commercial large language models (LLMs) are capable of good-quality metadata extraction and annotation with very little work on the part of investigators for several exemplar real-world annotation tasks in the neuroimaging literature. We investigated the GPT-4o LLM from OpenAI, which performed comparably with several groups of specially trained and supervised human annotators. The LLM achieves similar performance to humans, between 0.91 and 0.97, on zero-shot prompts without feedback to the LLM. Reviewing the disagreements between LLM and gold-standard human annotations, we note that actual LLM errors are comparable to human errors in most cases, and in many cases these disagreements are not errors. Based on the specific types of annotations we tested, with exceptionally reviewed gold-standard correct values, the LLM performance is usable for metadata annotation at scale. We encourage other research groups to develop and make available more specialized "micro-benchmarks," like the ones we provide here, for testing the annotation performance of both LLMs and more complex agent systems in real-world metadata annotation tasks.

Competing Interest Statement: The authors have declared no competing interest. Funding: National Institute on Drug Abuse, https://ror.org/00fq5cm18, R01 DA053028

bioRxiv
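The 0.91–0.97 agreement figures in the abstract come down to comparing extracted metadata fields against gold-standard annotations. A minimal sketch of that kind of scoring — with made-up field names and values, not the paper's actual tasks or scoring protocol — looks like this:

```python
# Hypothetical gold-standard annotations for two papers.
gold = [
    {"field_strength": "3T", "task": "resting-state", "n_subjects": "24"},
    {"field_strength": "1.5T", "task": "n-back", "n_subjects": "30"},
]
# Hypothetical LLM extractions for the same papers (one field wrong).
llm = [
    {"field_strength": "3T", "task": "resting-state", "n_subjects": "24"},
    {"field_strength": "3T", "task": "n-back", "n_subjects": "30"},
]

def agreement(gold, pred):
    # Fraction of (paper, field) pairs where the annotations match exactly.
    pairs = [(g[k], p[k]) for g, p in zip(gold, pred) for k in g]
    return sum(a == b for a, b in pairs) / len(pairs)

print(round(agreement(gold, llm), 3))  # 5 of 6 fields agree
```

Real evaluations would also have to normalize equivalent phrasings ("3 Tesla" vs "3T"), which is exactly where the "disagreements that are not errors" mentioned in the abstract tend to show up.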

Our new manuscript discussing pre-registration of predictive modelling studies and the importance of external validation is out now, published in GigaScience.

Take a look at it here:

https://doi.org/10.1093/gigascience/giaf036

#machinelearning #neuroscience #bioinformatics

A New Era for GPU Programming: NVIDIA Finally Adds Native Python Support to CUDA — Millions of Users Incoming?

For years, CUDA — the software toolkit developed by NVIDIA for GPU computing — has lacked native support for Python. But that’s finally changing. At the recent GTC conference, NVIDIA announced that…

Python in Plain English
On the cruelty of really teaching computing science (1988) | Lobsters

GitHub - deckard-designs/fruitstand: A library for regression testing LLM prompts

A library for regression testing LLM prompts. Contribute to deckard-designs/fruitstand development by creating an account on GitHub.

GitHub
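The idea behind prompt regression testing is simple even without the library: freeze a baseline output per prompt, then flag runs whose new output drifts too far from it. The sketch below is a generic illustration of that idea, not fruitstand's actual API; the prompt IDs, threshold, and similarity metric are all my own hypothetical choices.

```python
from difflib import SequenceMatcher

# Frozen baseline outputs, keyed by a hypothetical prompt ID.
baseline = {
    "greet": "Hello! How can I help you today?",
    "sum": "The total is 42.",
}

def check_regression(prompt_id, new_output, threshold=0.8):
    # Similarity ratio in [0, 1]; falling below the threshold
    # counts as a regression against the frozen baseline.
    ratio = SequenceMatcher(None, baseline[prompt_id], new_output).ratio()
    return ratio >= threshold, ratio

ok, score = check_regression("sum", "The total is 42.")
print(ok, round(score, 2))
```

A real setup would use semantic similarity (embeddings) rather than character-level diffing, since LLMs legitimately rephrase; the threshold then becomes the knob separating acceptable variation from a broken prompt.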