Bayesian-calibrated global sensitivity analysis for mathematical models using generative AI
journals.plos.org/ploscompbiol...
#MLSky
Author summary: In this research, we introduce a novel approach for conducting global sensitivity analysis in biological models using generative AI. Our method is fully compatible with Bayesian inference, which is widely used for parameter calibration of biological systems. Unlike traditional sensitivity analyses that assume independent parameters or impose simplified dependence structures, our approach performs sensitivity analysis directly on Bayesian-calibrated posterior distributions, where parameter correlations are learned from observational data. As a result, the resulting sensitivity analysis reflects realistic, data-relevant parameter sensitivities rather than purely structural sensitivities of an abstract model. The proposed framework is flexible, scalable, and broadly applicable to a wide range of deterministic models calibrated through Bayesian methods. Furthermore, the generative nature of the approach paves the way for future extensions to distributional sensitivity analysis in stochastic or agent-based models, enhancing its potential for modern biological applications.
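The core idea, estimating variance-based sensitivity indices directly from correlated posterior draws rather than from an independent-parameter design, can be sketched with a simple given-data estimator. Everything below (the toy two-parameter "posterior", the model, and the binning estimator) is an illustrative assumption, not the paper's generative-AI method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "posterior": correlated parameter draws (a stand-in for MCMC output).
n = 20_000
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
theta = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Deterministic model evaluated at each posterior draw.
y = theta[:, 0] + 0.5 * theta[:, 1] ** 2

def given_data_first_order(x, y, n_bins=50):
    """Estimate S_i = Var(E[Y | X_i]) / Var(Y) by binning x (given-data estimator)."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    bin_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    bin_counts = np.array([(idx == b).sum() for b in range(n_bins)])
    var_cond_mean = np.average((bin_means - y.mean()) ** 2, weights=bin_counts)
    return var_cond_mean / y.var()

s = [given_data_first_order(theta[:, i], y) for i in range(2)]
print(s)  # first-order indices for theta_1, theta_2 under the correlated posterior
```

Because the indices are computed on the correlated posterior samples, they capture data-relevant sensitivities; under parameter correlation they need not sum to at most one, unlike in the independent-parameter Sobol setting.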
Robust Reinforcement Learning via Leveraging Historically Optimal Policy With Regulation of Performance
ieeexplore.ieee.org/document/114...
#MLSky
Fortifying Robustness in Graph Neural Networks: A Loss Correction Approach to Mitigate Label Noise
ieeexplore.ieee.org/document/114...
#MLSky
Learning collision risk proactively from naturalistic driving data at scale
www.nature.com/articles/s42...
#MLSky
Learning collision risk proactively from naturalistic driving data at scale - Nature Machine Intelligence
Jiao et al. introduce a generalized safety measure for autonomous driving systems that learns collision risk from everyday driving without labels. It accurately warns in real time of crashes and near-crashes and secures time for an early reaction.
Nature
Neural Tuning for Ordinal Processing: Convergent Patterns in Human Brains and Artificial Networks
www.jneurosci.org/content/46/9...
#MLSky
Processing ordinality, i.e., the rank of an item in a series such as 1st, 2nd, 3rd, etc., is a fundamental skill shared by humans and animals. While humans often use symbolic sequences like numbers or letters, ordinality does not depend on language or symbols. Across species, ordinality plays a critical role in behaviors such as decision-making, foraging, and social organization. We hypothesize that ordinality perception is supported by neuronal tuning, i.e., neurons selectively responsive to specific ranks. Using ultrahigh-field 7 T fMRI and population receptive field (pRF) modeling in human participants (both female and male), we identified neural populations in parietal and premotor cortices that are tuned to nonsymbolic ordinal positions. Comparable with other sensory domains, tuning width increased with preferred ordinal rank, suggesting reduced precision and potentially lower perceptual accuracy for higher ranks. Additionally, pRF measurements revealed that cortical territory devoted to higher ordinalities decreased with rank, reinforcing that neural precision is greatest for early positions (e.g., 1st and 2nd) and declines with rank. These responses did not generalize to symbolic ordinality. Similar tuning to nonsymbolic ordinality emerged spontaneously in hierarchical convolutional neural networks trained on visual tasks. Together, these results suggest that the tuning properties of these neuronal populations support nonsymbolic ordinality perception and may reflect an inherent feature of neural processing.
Journal of Neuroscience
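The reported tuning pattern, Gaussian-like rank selectivity whose width grows with the preferred rank, can be illustrated with a toy tuning-curve model. The specific width scaling (`sigma = k * pref`) is an assumption for illustration, not the fitted pRF values:

```python
import numpy as np

ranks = np.arange(1, 6)   # ordinal positions 1st..5th
prefs = np.arange(1, 6)   # preferred rank of each modeled neural population

def tuning(rank, pref, k=0.4):
    """Gaussian tuning curve whose width grows with the preferred rank
    (illustrative scaling sigma = k * pref; not the paper's fitted values)."""
    sigma = k * pref
    return np.exp(-((rank - pref) ** 2) / (2.0 * sigma ** 2))

# Rows: populations by preferred rank; columns: presented ordinal position.
resp = np.array([[tuning(r, p) for r in ranks] for p in prefs])
print(resp.round(2))
```

Reading the rows top to bottom shows the effect described in the abstract: populations preferring later ranks respond more broadly to neighboring positions, i.e., precision is greatest for early ranks and declines with rank.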
Ten simple rules for building a collaborative coding culture
Great video by @[email protected] (@[email protected])! If you want a clear, easy-to-follow intro to #AI and how #LLMs work, Andrew’s deep-dive series is well worth your time. Can’t wait for the next installments!
#LargeLanguageModels #UniMelb #MLSky #ResearchSky #AcademicSky
RE: https://bsky.app/profile/did:plc:op7eevqcejmhguigdma42vdp/post/3mcxxhobvik2d
#MLSky #LLM #RAG Apple & U Edinburgh introduce CLaRa, a RAG framework that unifies retrieval and generation in a continuous latent space. By enabling end-to-end differentiable retrieval, CLaRa improves efficiency, reduces context length, and outperforms standard RAG approaches.
arxiv.org/abs/2511.18659
CLaRa: Bridging Retrieval and Generation with Continuous Latent Reasoning
Retrieval-augmented generation (RAG) enhances large language models (LLMs) with external knowledge but still suffers from long contexts and disjoint retrieval-generation optimization. In this work, we propose CLaRa (Continuous Latent Reasoning), a unified framework that performs embedding-based compression and joint optimization in a shared continuous space. To obtain semantically rich and retrievable compressed vectors, we introduce SCP, a key-preserving data synthesis framework using QA and paraphrase supervision. CLaRa then trains the reranker and generator end-to-end via a single language modeling loss, with gradients flowing through both modules using a differentiable top-k estimator. Theoretically, this unified optimization aligns retrieval relevance with answer quality. Experiments across multiple QA benchmarks show that CLaRa achieves state-of-the-art compression and reranking performance, often surpassing text-based fine-tuned baselines.
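The piece that makes end-to-end training possible is a differentiable relaxation of top-k document selection. A minimal relaxation of top-k can be sketched with successive softmaxes; this particular scheme is an illustrative stand-in, not necessarily the estimator CLaRa uses:

```python
import numpy as np

def soft_topk(scores, k, tau=0.1):
    """Differentiable relaxation of top-k selection via k successive softmaxes,
    each suppressing the mass already selected (illustrative; CLaRa's actual
    top-k estimator may differ). Returns soft selection weights summing to k."""
    s = scores.astype(float).copy()
    weights = np.zeros_like(s)
    for _ in range(k):
        p = np.exp((s - s.max()) / tau)
        p /= p.sum()
        weights += p
        # Damp the scores of items already (softly) selected.
        s = s + np.log1p(-p.clip(max=1 - 1e-9))
    return weights

scores = np.array([2.0, 0.1, 1.5, -0.3, 0.9])  # hypothetical reranker scores
w = soft_topk(scores, k=2)
# As tau -> 0, w approaches a hard indicator over the two best-scoring items,
# while remaining differentiable in the scores for any tau > 0.
```

Because every operation is smooth in the scores, the language-modeling loss on the generator can backpropagate through the selection weights into the reranker, which is what aligns retrieval relevance with answer quality in the abstract's framing.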
arXiv.org
🚨 Final call: 2 AI Full Professorship openings
at HPI / University of Potsdam (Berlin region in Germany)
Happy to answer questions about what makes our institute really unique!
Deadline: Jan. 15!! Application details:
hpi.de/fileadmin/us...
#MLSky