https://www.youtube.com/watch?v=l5ggH-YhuAw
PhD student at Predictive Neuroscience Lab (PNI Lab), http://pni-lab.github.io
____
Neuroimaging 🧠
Human-Computer Interaction 🖥️
Artificial Intelligence 🦾
Open Science 🍃
If you maintain an app that has a Linux version and has or needs virtual camera output or video streaming (or that already supports Spout2 in its Windows builds), please get in touch!
I'm writing an API to make this easy on Linux and I want to hear about your use case ^^
This paper introduces Recurrent Expansion (RE) as a new learning paradigm that advances beyond conventional Machine Learning (ML) and Deep Learning (DL). While DL focuses on learning from static data representations, RE proposes an additional dimension: learning from the evolving behavior of models themselves. RE emphasizes multiple mappings of data through identical deep architectures and analyzes their internal representations (i.e., feature maps) in conjunction with observed performance signals such as loss. By incorporating these behavioral traces, RE enables iterative self-improvement, allowing each model version to gain insight from its predecessors. The framework is extended through Multiverse RE (MVRE), which aggregates signals from parallel model instances, and further through Heterogeneous MVRE (HMVRE), where models of varying architectures contribute diverse perspectives. A scalable and adaptive variant, Sc-HMVRE, introduces selective mechanisms and scale diversity for real-world deployment. Altogether, RE presents a shift in DL: from purely representational learning to behavior-aware, self-evolving systems. It lays the groundwork for a new class of intelligent models capable of reasoning over their own learning dynamics, offering a path toward scalable, introspective, and adaptive artificial intelligence. A simple code example to support beginners in running their own experiments is provided in the Code Availability section of this paper.
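The core RE loop described in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's actual code: the regression task, the linear "models", and the choice of behavioral traces (predictions and per-sample loss) are all assumptions made here for clarity. Each round refits a model on the data augmented with the previous round's traces.

```python
# Toy sketch of a Recurrent Expansion (RE) loop: each model version
# learns from the data PLUS the behavioral traces of its predecessor.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

def fit_linear(X, y):
    """Stand-in 'model': ordinary least-squares fit, returns weights."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

losses = []
X_k = X  # round-0 input is the raw data
for k in range(3):
    w = fit_linear(X_k, y)
    preds = X_k @ w
    loss = float(np.mean((preds - y) ** 2))
    losses.append(loss)
    # RE step: expand the next round's input with this round's
    # behavioral traces (predictions and per-sample squared error).
    X_k = np.hstack([X_k, preds[:, None], ((preds - y) ** 2)[:, None]])
```

Because each round only adds columns, the training loss here can never increase; the interesting question the paper addresses is how such behavioral feedback helps with far richer architectures than this toy.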
Added the Fedilug emoji:
: fedilug : (without the spaces)
Fedilug is the new group that brings together all the Linux enthusiasts on the Fediverse.
Anyone can join, regardless of their instance; on Mastodon you can add the emoji
to your profile.
You can join the Fediverse's Linux User Group by following it from here:
The group is hosted on https://diggita.com, an Italian Lemmy instance.
Yay! The preprint of this paper is finally out -- it was so hard to do, because new LLMs kept coming out and blowing up our measurements. 😂
But the fact of the matter is, for the kind of experimental details we were trying to pull out of the scientific fMRI literature, LLMs have gotten as good as humans. This could make all sorts of fMRI data sharing and understanding much easier.
We show that recent (mid-to-late 2024) commercial large language models (LLMs) are capable of good-quality metadata extraction and annotation with very little work on the part of investigators, for several exemplar real-world annotation tasks in the neuroimaging literature. We investigated the GPT-4o LLM from OpenAI, which performed comparably with several groups of specially trained and supervised human annotators. The LLM achieves performance similar to humans, between 0.91 and 0.97, on zero-shot prompts without feedback to the LLM. Reviewing the disagreements between LLM and gold-standard human annotations, we note that actual LLM errors are comparable to human errors in most cases, and in many cases these disagreements are not errors at all. Based on the specific types of annotations we tested, with exceptionally reviewed gold-standard correct values, the LLM performance is usable for metadata annotation at scale. We encourage other research groups to develop and make available more specialized "micro-benchmarks," like the ones we provide here, for testing the annotation performance of both LLMs and more complex agent systems on real-world metadata annotation tasks.

Competing Interest Statement: The authors have declared no competing interest.
Funding: National Institute on Drug Abuse, https://ror.org/00fq5cm18, R01 DA053028
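The zero-shot setup described above is easy to picture as a prompt-and-parse pair. The sketch below is hypothetical: the field names, prompt wording, and JSON-based parsing are illustrative assumptions, not the authors' actual pipeline or benchmark format.

```python
# Hypothetical sketch of zero-shot metadata annotation with an LLM:
# build a prompt asking for specific fields as JSON, then validate
# the reply. The field names here are made up for illustration.
import json

FIELDS = ["scanner_field_strength", "task_name", "n_participants"]

def build_prompt(methods_text: str) -> str:
    """Zero-shot prompt: ask for the fields as a JSON object."""
    return (
        "Extract the following fMRI experimental details from the "
        f"Methods section below. Answer with a JSON object with keys {FIELDS}; "
        "use null for anything not stated.\n\n" + methods_text
    )

def parse_annotation(llm_reply: str) -> dict:
    """Parse the LLM's JSON reply, keeping only the expected keys."""
    data = json.loads(llm_reply)
    return {key: data.get(key) for key in FIELDS}
```

In practice `build_prompt(...)` would be sent to the model API and `parse_annotation(...)` applied to the reply; restricting the output to known keys keeps stray model chatter out of the extracted metadata.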
Our new manuscript discussing pre-registration of predictive modelling studies and the importance of external validation is out now, published in GigaScience.
Take a look at it here:
A New Era for GPU Programming: NVIDIA Finally Adds Native Python Support to CUDA
For years, CUDA — the software toolkit developed by NVIDIA for GPU computing — has lacked native support for Python. But that’s finally changing. At the recent GTC conference, NVIDIA announced that…
fruitstand: A Library for Regression Testing LLMs
https://github.com/deckard-designs/fruitstand
Discussions: https://discu.eu/q/https://github.com/deckard-designs/fruitstand