Representation learning often emphasizes metric preservation. We instead build symplectic structural invariance directly into the representation.

https://arxiv.org/abs/2512.19409

We embed Hamiltonian/symplectic geometry by making the RNN state dynamics a symplectomorphism, which preserves Legendre duality (information geometry) through time. This yields structure-preserving representations enforced by the latent dynamics, rather than imposed indirectly via the output.

#ReservoirComputing #RepresentationLearning #InformationGeometry #SymplecticGeometry #HamiltonianDynamics #GeometricDeepLearning #DynamicalSystems #PhysicsInformedML

Symplectic Reservoir Representation of Legendre Dynamics

Modern learning systems act on internal representations of data, yet how these representations encode underlying physical or statistical structure is often left implicit. In physics, conservation laws of Hamiltonian systems, such as symplecticity, guarantee long-term stability, and recent work has begun to hard-wire such constraints into learning models at the loss or output level. Here we ask a different question: what would it mean for the representation itself to obey a symplectic conservation law in the sense of Hamiltonian mechanics? We express this symplectic constraint through Legendre duality: the pairing between primal and dual parameters becomes the structure that the representation must preserve. We formalize Legendre dynamics as stochastic processes whose trajectories remain on Legendre graphs, so that the evolving primal-dual parameters stay Legendre dual. We show that this class includes linear time-invariant Gaussian process regression and Ornstein-Uhlenbeck dynamics. Geometrically, we prove that the maps that preserve all Legendre graphs are exactly the symplectomorphisms of cotangent bundles of the form "cotangent lift of a base diffeomorphism followed by an exact fibre translation". Dynamically, this characterization leads to the design of a Symplectic Reservoir (SR), a reservoir-computing architecture, a special case of a recurrent neural network, whose recurrent core is generated by Hamiltonian systems that are at most linear in the momentum. Our main theorem shows that every SR update has this normal form and therefore transports Legendre graphs to Legendre graphs, preserving Legendre duality at each time step. Overall, SR implements a geometrically constrained, Legendre-preserving representation map, injecting symplectic geometry and Hamiltonian mechanics directly at the representational level.

arXiv.org
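The normal form in the abstract is concrete enough to sketch in code. Below is a minimal, hypothetical Python sketch (not the paper's implementation) of one reservoir update of the form "cotangent lift of a base diffeomorphism followed by an exact fibre translation"; the base map phi and the linear potential S are illustrative choices of my own, and invertibility of phi is assumed.

```python
# Hypothetical sketch, not the paper's code: one Symplectic Reservoir-style
# update in the normal form "cotangent lift + exact fibre translation".
import numpy as np

def phi(q, W):
    # Illustrative base diffeomorphism on configuration space;
    # invertible when the spectral norm of W is small.
    return q + np.tanh(W @ q)

def jac_phi(q, W):
    # Jacobian D phi(q) of the base map above.
    return np.eye(q.size) + np.diag(1.0 - np.tanh(W @ q) ** 2) @ W

def sr_update(q, p, W, b):
    # Cotangent lift: (q, p) -> (phi(q), Dphi(q)^{-T} p), which preserves
    # the canonical symplectic form dq ^ dp.
    q_new = phi(q, W)
    p_new = np.linalg.solve(jac_phi(q, W).T, p)
    # Exact fibre translation: p -> p + dS(q_new) with S(q) = b . q here,
    # so dS = b is constant and the translated one-form is exact.
    return q_new, p_new + b
```

Each step is a symplectomorphism by construction, so a composition of many such updates keeps the latent (q, p) trajectory on the preserved geometric structure.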
Indian-American mathematician C R Rao awarded International Prize in Statistics for revolutionary work - mindvoice


mindvoice - the news from healthy mind

Manifold Alignment

In this article, I give a brief overview of how to use manifold alignment to unify multiple datasets.

https://towardsdatascience.com/manifold-alignment-c67fc3fc1a1c

#manifold #informationgeometry #datascience #ai #machinelearning #statistics #mathematics #python

Manifold Alignment - Towards Data Science

Manifold alignment is the problem of finding a common latent space by jointly performing dimensionality reduction on multiple datasets while preserving known correspondences between them…

Towards Data Science
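For readers who want a concrete starting point, here is a minimal sketch of one standard manifold-alignment recipe, semi-supervised alignment via a joint graph Laplacian; the linked article may use a different formulation, and all function names below are illustrative, not from the article.

```python
# Minimal sketch of semi-supervised manifold alignment via joint
# Laplacian eigenmaps; assumes the joint graph ends up connected.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def knn_affinity(X, k=10):
    # Symmetric 0/1 k-nearest-neighbour affinity within one dataset.
    D = cdist(X, X)
    W = np.zeros_like(D)
    idx = np.argsort(D, axis=1)[:, 1:k + 1]  # column 0 is the point itself
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = 1.0
    return np.maximum(W, W.T)

def align(X, Y, pairs, k=10, dim=2, mu=1.0):
    # pairs: list of (i, j) with X[i] known to correspond to Y[j].
    nx = len(X)
    W = np.zeros((nx + len(Y), nx + len(Y)))
    W[:nx, :nx] = knn_affinity(X, k)          # manifold structure of X
    W[nx:, nx:] = knn_affinity(Y, k)          # manifold structure of Y
    for i, j in pairs:                        # cross-dataset correspondences
        W[i, nx + j] = W[nx + j, i] = mu
    L = np.diag(W.sum(axis=1)) - W            # joint graph Laplacian
    _, vecs = eigh(L)
    Z = vecs[:, 1:dim + 1]                    # skip the constant eigenvector
    return Z[:nx], Z[nx:]                     # joint low-dimensional coordinates
```

The weight mu trades off matching the known correspondences against preserving each dataset's own neighbourhood geometry.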
Another recent paper with @miyamotohk: The Fisher–Rao loss for learning under label noise https://link.springer.com/article/10.1007/s41884-022-00076-8
#NeuralNetworks #MachineLearning #InformationGeometry
The Fisher–Rao loss for learning under label noise - Information Geometry

Choosing a suitable loss function is essential when learning by empirical risk minimisation. In many practical cases, the datasets used for training a classifier may contain incorrect labels, which motivates the use of loss functions that are inherently robust to label noise. In this paper, we study the Fisher–Rao loss function, which emerges from the Fisher–Rao distance in the statistical manifold of discrete distributions. We derive an upper bound for the performance degradation in the presence of label noise, and analyse the learning speed of this loss. Comparing with other commonly used losses, we argue that the Fisher–Rao loss provides a natural trade-off between robustness and training dynamics. Numerical experiments on synthetic and MNIST datasets illustrate this behaviour.

SpringerLink
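As a rough illustration of the loss the abstract describes: on the simplex of discrete distributions, the Fisher–Rao distance has the standard closed form d_FR(p, q) = 2 arccos(Σ_i √(p_i q_i)), so a batched loss is a few lines of NumPy. This is a hedged sketch based on that textbook formula; consult the paper for its exact convention (e.g. the constant factor).

```python
# Sketch of the Fisher-Rao loss between predicted and target distributions,
# assuming the standard closed form on the probability simplex.
import numpy as np

def fisher_rao_loss(probs, targets):
    # probs:   (batch, classes) predicted distributions (e.g. softmax outputs).
    # targets: (batch, classes) target distributions (possibly noisy labels).
    bc = np.sqrt(probs * targets).sum(axis=1)   # Bhattacharyya coefficient
    return 2.0 * np.arccos(np.clip(bc, 0.0, 1.0))  # geodesic distance
```

For a one-hot target with true class y this reduces to 2 arccos(√p_y), which is bounded above by π, in contrast to cross-entropy, which diverges as p_y → 0; that boundedness is one intuition for the robustness to label noise discussed in the paper.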
Just got an early Christmas gift! #optimization, #informationgeometry, #differentialgeometry, #deeplearning

Who am I, and why am I here? #introduction

I am a machine learning researcher, using tools from #Bayes, #stats, #optimization, #informationgeometry, #deeplearning, signal processing, etc.

I care deeply about people, their well-being, inclusion, diversity, equity, privacy, and justice.

I believe in a slow and rigorous scientific process that adds value to existing knowledge and improves its positive impact on society.

I am here to learn about all of these.

More about me at https://emtiyaz.github.io/

Retweeting the Information Geometry journal:
📽️Shun-ichi Amari Interview (2021) by @INNSociety
Part 1. How did you get into the field? (02:39)
Part 2. What are your most significant accomplishments? (04:29)
Part 3. What are you working on now? (08:15) ... and more!
#informationgeometry
See here
https://www.youtube.com/embed/jk6fe5j47qM?start=109
YouTube