In a new #ICLR2026 publication we provide a novel algorithm for semi-analytically constructing the stable and unstable manifolds of fixed points and cycles of ReLU-based RNNs:
https://openreview.net/pdf?id=EAwLAwHvhk

Why is this important?

Because it provides insight into why and how trained RNNs produce their behavior, which is important for scientific and medical applications and for explainable AI more generally. In scientific ML, RNNs are a common tool for *dynamical systems reconstruction* (https://www.nature.com/articles/s41583-023-00740-7), where models are trained to approximate the dynamical system underlying observed time series. The trained RNNs can then be analyzed further as formal surrogates of the systems they were trained on.

An RNN’s dynamical repertoire depends on the topological and geometrical properties of its state space. Stable and unstable manifolds of fixed and periodic points dissect a dynamical system’s state space into different *basins of attraction*, their intersections lead to chaotic dynamics with fractal geometry, and – more generally – they provide a type of skeleton for the system’s dynamics, forming structures like separatrix cycles or heteroclinic channels.
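For a ReLU-based RNN these objects are computable because the map is piecewise linear: within each activation region the dynamics are affine, so fixed points, and the eigenspaces their local stable/unstable manifolds are tangent to, follow in closed form. The full continuation algorithm is in the paper; below is only a minimal numpy sketch of this region-wise step, using one common piecewise-linear RNN form (z_{t+1} = A z_t + W relu(z_t) + h) that we chose for illustration:

```python
import numpy as np
from itertools import product

# Piecewise-linear RNN, one common ReLU-RNN form (our choice for illustration):
#   z_{t+1} = A @ z_t + W @ relu(z_t) + h
# In the region with fixed sign pattern d in {0,1}^M, relu(z) = D @ z with
# D = diag(d), so the map is affine there and everything is solvable exactly.

def fixed_points_with_eigenspaces(A, W, h):
    M = len(h)
    results = []
    # Exhaustive enumeration of the 2^M activation regions: fine for a sketch,
    # far too slow for large M (the paper is about doing this cleverly).
    for d in product([0.0, 1.0], repeat=M):
        D = np.diag(d)
        J = A + W @ D                                  # Jacobian in this region
        try:
            z = np.linalg.solve(np.eye(M) - J, h)      # candidate fixed point
        except np.linalg.LinAlgError:
            continue                                   # no unique fp in this region
        if not np.allclose((z > 0).astype(float), d):
            continue                                   # fp lies outside its region
        lam, V = np.linalg.eig(J)
        results.append({
            "z": z,
            "eigenvalues": lam,
            "stable_dirs":   V[:, np.abs(lam) < 1],    # local stable eigenspace
            "unstable_dirs": V[:, np.abs(lam) > 1],    # local unstable eigenspace
        })
    return results
```

Near each fixed point the stable and unstable manifolds are tangent to these eigenspaces (and coincide with them inside the linear region); the paper's contribution is continuing them semi-analytically across region boundaries, and extending this to cycles.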

📢 𝗨𝗞𝗣 𝗟𝗮𝗯 𝗮𝘁 𝗜𝗖𝗟𝗥𝟮𝟬𝟮𝟲 📢
Happy to share that our paper has been accepted to #ICLR2026 🎉

📜 𝚁𝚎𝚟𝚎𝚕𝚊: 𝗗𝗲𝗻𝘀𝗲 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗲𝗿 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝘃𝗶𝗮 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹𝗶𝗻𝗴

👥 Fengyu Cai, Tong Chen, Xinran Zhao, Sihao Chen, Hongming Zhang, Sherry Tongshuang Wu, Iryna Gurevych, Heinz Koeppl

ICLR 2026 roundup: the research community is focusing on GRPO (157 papers) rather than DPO, prioritizing RLVR (125 papers) over RLHF, with 202 papers on Mamba/SSMs. Nait (smart tuning on just 10% of the data) helps optimize efficiency. 257 papers cover test-time compute and 123 cover hallucination. One warning: models that follow instructions well are easier to hit with injection attacks. #AI #HọcMáy #ICLR2026 #NCKH #DeepLearning #Mamba #RLVR #GRPO #MạngNeural #BảoMậtAI #ViễnTưởngAI

https://www.reddit.com/r/LocalLLaMA/comments/1qsh7dz/analyzed_5357_iclr_2026_acc

Multi-agent systems fail for five main reasons: latency, token cost, cascading errors, brittle structures, and poor observability. Fourteen ICLR 2026 papers propose solutions such as predictive actions, KV sharing, and structured decision trees; the toy calculation below illustrates why cascading errors alone can sink a pipeline. #HệThốngĐaTácNhân #ICLR2026 #TríTuệNhânTạo #MultiAgentSystems #AI

https://www.reddit.com/r/LocalLLaMA/comments/1qs5t82/14_iclr_2026_papers_on_why_multiagent_systems/
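On the cascading-error point specifically: if each agent call in a sequential pipeline is independently correct with probability p, the chain succeeds end-to-end with probability p^n. A toy Python calculation (numbers ours, purely illustrative):

```python
# Toy illustration of cascading errors in sequential multi-agent pipelines:
# if each step is independently correct with probability p, an n-step chain
# succeeds end-to-end with probability p**n.
for p in (0.99, 0.95, 0.90):
    for n in (5, 10, 20):
        print(f"p={p:.2f}  n={n:2d}  chain success = {p**n:.1%}")
# Even at 95% per-step reliability, a 20-step chain finishes correctly
# only ~36% of the time, which is why brittle long chains fail in practice.
```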

Language Technology Group at the University of Oslo has two papers accepted at #ICLR2026!

- Dual Language Models: Balancing Training Efficiency and Overfitting Resilience by David Samuel and Lucas Charpentier

- Fluent Alignment with Disfluent Judges: Post-training for Lower-resource Languages by David Samuel, Lilja Øvrelid, Erik Velldal and Andrey Kutuzov

Details and links in the thread:

#NLProc #Norway #Norge #UiO

Excited to share that our paper “RobustSpring: Benchmarking Robustness to Image Corruptions for Optical Flow, Scene Flow and Stereo” has been accepted to #ICLR2026 🎉.

We introduce RobustSpring, a new benchmark that evaluates not only accuracy but also robustness of optical flow, scene flow, and stereo models under 20 real‑world image corruptions.
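The exact corruption set and scoring protocol are defined by the benchmark itself; as a rough sketch of how corruption robustness is typically measured for such models, one compares endpoint error (EPE) on clean versus corrupted inputs. All functions below are hypothetical stand-ins, not the RobustSpring evaluation code:

```python
import numpy as np

def epe(flow_pred, flow_gt):
    """Mean endpoint error between predicted and ground-truth flow, shape (H, W, 2)."""
    return float(np.linalg.norm(flow_pred - flow_gt, axis=-1).mean())

def corruption_robustness(model, frame_pair, flow_gt, corruptions):
    """Relative EPE degradation, averaged over corruption functions.

    `model(frame1, frame2) -> flow` and each `corrupt(image) -> image` are
    hypothetical; RobustSpring's 20 corruptions and its official metrics are
    defined in the paper and on the benchmark site.
    """
    clean_epe = epe(model(*frame_pair), flow_gt)
    corrupted_epes = [epe(model(*(corrupt(f) for f in frame_pair)), flow_gt)
                      for corrupt in corruptions]
    return float(np.mean(corrupted_epes)) / clean_epe  # 1.0 = unaffected; higher = less robust
```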

Congratulations to the authors!

🌐 For more news: https://www.collaborative-ai.org/publications/oei26_iclr/

#LaVCa: LLM-assisted visual cortex captioning arxiv.org/abs/2502.13606 using "large language models (LLMs) to generate natural-language captions for images to which voxels are selective"; to be presented at #ICLR2026; #BCI #NeuroTech

LaVCa: LLM-assisted Visual Cortex Captioning

Understanding the properties of neural populations (or voxels) in the human brain can advance our comprehension of human perceptual and cognitive processing capabilities and contribute to developing brain-inspired computer models. Recent encoding models using deep neural networks (DNNs) have successfully predicted voxel-wise activity. However, interpreting the properties that explain voxel responses remains challenging because of the black-box nature of DNNs. As a solution, we propose LLM-assisted Visual Cortex Captioning (LaVCa), a data-driven approach that uses large language models (LLMs) to generate natural-language captions for images to which voxels are selective. By applying LaVCa to image-evoked brain activity, we demonstrate that LaVCa generates captions that describe voxel selectivity more accurately than the previously proposed method. Furthermore, the captions generated by LaVCa quantitatively capture more detailed properties than the existing method at both the inter-voxel and intra-voxel levels. Moreover, a more detailed analysis of the voxel-specific properties generated by LaVCa reveals fine-grained functional differentiation within regions of interest (ROIs) in the visual cortex, as well as voxels that simultaneously represent multiple distinct concepts. These findings offer profound insights into human visual representations by assigning detailed captions throughout the visual cortex while highlighting the potential of LLM-based methods in understanding brain representations. Please check out our webpage at https://sites.google.com/view/lavca-llm/
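Reading the pipeline off this abstract, the core loop would be: use an encoding model to find which images most drive a voxel, caption those images, and have an LLM compress the captions into a single selectivity description. A schematic sketch, where every callable is a hypothetical stand-in rather than the authors' code:

```python
def voxel_caption(voxel_idx, images, encoder, captioner, llm, k=10):
    """Schematic LaVCa-style pipeline (stand-in functions, not the paper's code).

    encoder(image) -> predicted voxel activity vector (an encoding model),
    captioner(image) -> str, llm(prompt) -> str.
    """
    # 1) Rank images by the encoding model's predicted response for this voxel.
    ranked = sorted(images, key=lambda im: encoder(im)[voxel_idx], reverse=True)
    # 2) Caption the k images the voxel is predicted to be most selective for.
    captions = [captioner(im) for im in ranked[:k]]
    # 3) Summarize the captions into one natural-language selectivity description.
    prompt = ("These captions describe images that strongly drive one brain voxel.\n"
              "State the common visual property in one sentence:\n"
              + "\n".join(f"- {c}" for c in captions))
    return llm(prompt)
```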


DeepReinforce (@deep_reinforce)

An announcement that CUDA-L1 has been accepted to ICLR 2026. This was the first work to apply reinforcement learning (RL) to CUDA code generation, and follow-up work such as CUDA-L2 is also mentioned. The post highlights how quickly the research community has moved, with further results and updates to be released later.

https://x.com/deep_reinforce/status/2015894636448149665

#cudal1 #cuda #reinforcementlearning #iclr2026 #codegeneration

DeepReinforce (@deep_reinforce) on X

🎉🎉CUDA-L1 is accepted to ICLR 2026! 🌟🌟This was our first work using RL for CUDA generation. Now we have CUDA-L2, alongside so much great work from the community. It’s amazing how fast the field has moved in just the past six months. 🦾🦾 Still cooking! stay tuned! 🔗Paper
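The announcement itself gives no technical detail beyond "RL for CUDA generation". The natural reward signal in this setting is the measured speedup of a generated kernel over a reference implementation, gated on output correctness; the sketch below is our guess at that general shape, not CUDA-L1's actual reward design:

```python
import statistics, time

def kernel_reward(run_candidate, run_reference, outputs_match, trials=5):
    """Schematic speedup-based reward for RL over generated CUDA kernels.

    `run_candidate` / `run_reference` (launch and synchronize a kernel) and
    `outputs_match` (correctness check) are hypothetical callables.
    """
    if not outputs_match():
        return 0.0                          # incorrect kernels earn no reward
    def median_time(fn):
        times = []
        for _ in range(trials):
            t0 = time.perf_counter()
            fn()
            times.append(time.perf_counter() - t0)
        return statistics.median(times)     # median damps timing noise
    return median_time(run_reference) / median_time(run_candidate)  # >1 = faster
```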
