Neural Tuning for Ordinal Processing: Convergent Patterns in Human Brains and Artificial Networks
www.jneurosci.org/content/46/9...
#MLSky
Processing ordinality, i.e., the rank of an item in a series such as 1st, 2nd, 3rd, etc., is a fundamental skill shared by humans and animals. While humans often use symbolic sequences like numbers or letters, ordinality does not depend on language or symbols. Across species, ordinality plays a critical role in behaviors such as decision-making, foraging, and social organization. We hypothesize that ordinality perception is supported by neuronal tuning, i.e., neurons selectively responsive to specific ranks. Using ultrahigh-field 7 T fMRI and population receptive field (pRF) modeling in human participants (both female and male), we identified neural populations in parietal and premotor cortices that are tuned to nonsymbolic ordinal positions. As in other sensory domains, tuning width increased with preferred ordinal rank, suggesting reduced precision and potentially lower perceptual accuracy for higher ranks. Additionally, pRF measurements revealed that cortical territory devoted to higher ordinalities decreased with rank, reinforcing that neural precision is greatest for early positions (e.g., 1st and 2nd) and declines with rank. These responses did not generalize to symbolic ordinality. Similar tuning to nonsymbolic ordinality emerged spontaneously in hierarchical convolutional neural networks trained on visual tasks. Together, these results suggest that the tuning properties of these neuronal populations support nonsymbolic ordinality perception and may reflect an inherent feature of neural processing.
Journal of Neuroscience
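The rank-tuning scheme in the abstract can be sketched numerically: each neural population responds with a Gaussian profile around its preferred rank, and tuning width grows with that rank. A minimal illustration, assuming Gaussian tuning curves and a linear width-vs-rank relation (function name and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def gaussian_tuning(rank, preferred, width):
    """Gaussian response of a population tuned to `preferred` rank."""
    return np.exp(-0.5 * ((rank - preferred) / width) ** 2)

ranks = np.arange(1, 8)  # ordinal positions 1st..7th
# Tuning width grows with preferred rank (illustrative linear scaling),
# mirroring the reported loss of precision at higher ordinalities.
populations = [(p, 0.4 + 0.25 * p) for p in ranks]

for preferred, width in populations:
    responses = gaussian_tuning(ranks, preferred, width)
    print(f"pref={preferred}: width={width:.2f}, "
          f"response at rank 1 = {responses[0]:.3f}")
```

Broader curves at higher preferred ranks overlap more with their neighbors, which is one way tuning width can translate into lower perceptual accuracy for later positions.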
Ten simple rules for building a collaborative coding culture
#MlSky #LLM #RAG Apple & U Edinburgh introduce CLaRa, a RAG framework that unifies retrieval and generation in a continuous latent space. By enabling end-to-end differentiable retrieval, CLaRa improves efficiency, reduces context length, & outperforms standard RAG approaches.
arxiv.org/abs/2511.18659
CLaRa: Bridging Retrieval and Generation with Continuous Latent Reasoning
Retrieval-augmented generation (RAG) enhances large language models (LLMs) with external knowledge but still suffers from long contexts and disjoint retrieval-generation optimization. In this work, we propose CLaRa (Continuous Latent Reasoning), a unified framework that performs embedding-based compression and joint optimization in a shared continuous space. To obtain semantically rich and retrievable compressed vectors, we introduce SCP, a key-preserving data synthesis framework using QA and paraphrase supervision. CLaRa then trains the reranker and generator end-to-end via a single language modeling loss, with gradients flowing through both modules using a differentiable top-k estimator. Theoretically, this unified optimization aligns retrieval relevance with answer quality. Experiments across multiple QA benchmarks show that CLaRa achieves state-of-the-art compression and reranking performance, often surpassing text-based fine-tuned baselines.
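The "differentiable top-k estimator" is the piece that lets a single language-modeling loss train the reranker: selection must stay smooth in the scores so gradients can flow back through it. The paper's actual estimator is not reproduced here; below is a generic soft top-k relaxation (sigmoid around the k-th/(k+1)-th score boundary) as a minimal sketch of the idea, with illustrative scores and temperature:

```python
import numpy as np

def soft_topk(scores, k, temp=0.1):
    """Differentiable relaxation of hard top-k selection.

    Places a sigmoid around a threshold halfway between the k-th and
    (k+1)-th largest scores. As temp -> 0 the weights approach the
    hard 0/1 top-k indicator, while remaining smooth in the scores,
    so gradients from a downstream loss can reach the scorer.
    """
    sorted_s = np.sort(scores)[::-1]                  # descending
    tau = 0.5 * (sorted_s[k - 1] + sorted_s[k])       # soft cutoff
    return 1.0 / (1.0 + np.exp(-(scores - tau) / temp))

# Illustrative reranker scores for five candidate passages:
scores = np.array([2.0, 0.1, 1.5, -0.3, 0.9])
weights = soft_topk(scores, k=2)
print(np.round(weights, 3))  # mass concentrates on the two highest scores
```

In a CLaRa-style setup one would weight the compressed passage vectors by these selection weights before feeding them to the generator, so the LM loss shapes both the generator and the reranker's scores.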
arXiv.org
🚨Final call: 2 AI Full Professorship openings
at HPI / University of Potsdam (Berlin region in Germany)
Happy to answer questions about what makes our institute really unique!
Deadline: Jan. 15!! Application details:
hpi.de/fileadmin/us...
#MLSky
How the Brain Organizes Memories
A new study, published in Nature, reveals, for the first time, how the brain stores memory content and its context in two largely separate groups of nerve cells.
t1p.de/j4mr7
@tuberlin.bsky.social@bsky.brid.gy @unibonn.bsky.social@bsky.brid.gy
#MLSky #neuroskyence #compneurosky #NeuroAI
People are turning to AI tools to fill gaps in rural and after-hours care as 40 million Americans consult ChatGPT.
This shift exposes the critical need to validate AI accuracy as patients navigate complex medical data without clinical oversight.
#MedSky #MLSky 🛟
Exclusive: 40 million people turn to ChatGPT for health care
OpenAI's new report shows how chatbots can help navigate health care.
Axios
New research: AI models are learning to deceive us, and getting better at hiding it. OpenAI + Apollo found models lie, cover tracks, and behave perfectly only when “watched.” Anti-scheming training reduced deception 97%… or just taught better hiding.
arxiv.org/abs/2509.01554 #mlsky #aimed #llmmodels
Give me some open source computer vision libraries.
#python #code #mlsky #ai #machinelearning #programming
Smartwatch bio-monitoring allowed parents to intervene in tantrums within four seconds. This preemptive approach reduced episode duration by over 50% (22 min -> 10 min), indicating a potential role for AI in managing pediatric behavioral crises.
#MedSky #PedSky #MLSky
Smartwatch System Helps Defuse Children's Temper Tantrums, Experts Say
TUESDAY, Dec. 16, 2025 (HealthDay News) — Parents can better defuse their kids’ temper tantrums with the help of AI-powered smartwatch monitoring, a new study s