\[
\underbrace{\forall f, g \colon \operatorname{cov}[f(X), g(Y)] = 0}_{\text{independence condition}} \underset{(f=g=\operatorname{id})}{\implies} \underbrace{\operatorname{cov}[X, Y] = 0}_{\text{uncorrelatedness condition}}
\]
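The converse of this implication fails: uncorrelated does not imply independent. A minimal numerical sketch (using NumPy, with a hypothetical choice of $X \sim \mathcal{N}(0,1)$ and $Y = X^2$) shows that $\operatorname{cov}[X, Y] \approx 0$ while a different choice of $f$, $g$ exposes the dependence:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = x ** 2  # Y is a deterministic function of X, hence dependent on X

# With f = g = id: cov[X, X^2] = E[X^3] = 0 for symmetric X,
# so X and Y are uncorrelated.
cov_id = np.cov(x, y)[0, 1]

# Choosing f(x) = x^2, g = id instead gives cov[X^2, X^2] = var[X^2] = 2
# for a standard normal, which is clearly nonzero.
cov_f_g = np.cov(x ** 2, y)[0, 1]

print(cov_id)   # close to 0
print(cov_f_g)  # close to 2
```

So a single vanishing covariance only rules out linear dependence; independence requires the covariance to vanish for all $f$, $g$.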
\[
\operatorname{PMI}(\boldsymbol{y}) = \log p(\boldsymbol{y}) - \log \prod_{j=1}^q p(y_j)
\]
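The PMI of a single outcome $\boldsymbol{y}$ can be computed directly from this formula. A small sketch, assuming a hypothetical $2 \times 2$ joint distribution over $(y_1, y_2)$:

```python
import numpy as np

# Hypothetical joint distribution p(y1, y2); rows index y1, columns y2.
p_joint = np.array([[0.4, 0.1],
                    [0.1, 0.4]])
p_y1 = p_joint.sum(axis=1)  # marginal of y1
p_y2 = p_joint.sum(axis=0)  # marginal of y2

def pmi(i, j):
    # PMI(y) = log p(y) - log prod_j p(y_j)
    return np.log(p_joint[i, j]) - (np.log(p_y1[i]) + np.log(p_y2[j]))

print(pmi(0, 0))  # positive: (0, 0) co-occurs more often than under independence
print(pmi(0, 1))  # negative: (0, 1) co-occurs less often than under independence
```

A positive PMI means the outcome is more probable under the joint than under the product of marginals; zero PMI everywhere is exactly independence.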
I think mutual information would be cleaner and easier to understand if it were defined top-down from the start as the KL divergence between the joint distribution and the product of the marginal distributions. The definition of multivariate mutual information is:
\[
I(\boldsymbol{X}) = D_{\mathrm{KL}} \left(P_{\boldsymbol{X}} \middle\| \bigotimes_{X \in \boldsymbol{X}} P_X \right)
\]
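For discrete variables this definition is directly computable, and it coincides with the expectation of the PMI under the joint distribution. A minimal sketch, reusing the hypothetical $2 \times 2$ joint distribution from above:

```python
import numpy as np

# Hypothetical joint distribution over two binary variables.
p_joint = np.array([[0.4, 0.1],
                    [0.1, 0.4]])
# Product measure of the marginals (the ⊗ term in the definition).
p_prod = np.outer(p_joint.sum(axis=1), p_joint.sum(axis=0))

# I(X) = D_KL(P_X || ⊗ P_Xi) = sum_y p(y) log(p(y) / q(y))
mi = np.sum(p_joint * np.log(p_joint / p_prod))

# Equivalently: MI is the expectation of PMI under the joint.
pmi = np.log(p_joint) - np.log(p_prod)
mi_from_pmi = np.sum(p_joint * pmi)

print(mi)           # strictly positive here, since the variables are dependent
print(mi_from_pmi)  # identical to mi
```

The identity $I(\boldsymbol{X}) = \mathbb{E}_{p(\boldsymbol{y})}[\operatorname{PMI}(\boldsymbol{y})]$ is what ties the two formulas in these notes together.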
Exploiting Information Theory for Intuitive Robot Programming of Manual Activities
Authors: Elena Merlo, Marta Lagomarsino, Edoardo Lamon, Arash Ajoudani
pre-print -> https://arxiv.org/abs/2410.23963
#robotics #dataset #information_theory #shannon #observational_learning
Observational learning is a promising approach to enable people without expertise in programming to transfer skills to robots in a user-friendly manner, since it mirrors how humans learn new behaviors by observing others. Many existing methods focus on instructing robots to mimic human trajectories, but motion-level strategies often pose challenges for skill generalization across diverse environments. This paper proposes a novel framework that allows robots to achieve a higher-level understanding of human-demonstrated manual tasks recorded in RGB videos. By recognizing the task structure and goals, robots generalize what is observed to unseen scenarios. We found our task representation on Shannon's Information Theory (IT), which is applied for the first time to manual tasks. IT helps extract the active scene elements and quantify the information shared between hands and objects. We exploit scene graph properties to encode the extracted interaction features in a compact structure and segment the demonstration into blocks, streamlining the generation of Behavior Trees for robot replicas. Experiments validated the effectiveness of IT in automatically generating robot execution plans from a single human demonstration. Additionally, we provide HANDSOME, an open-source dataset of HAND Skills demOnstrated by Multi-subjEcts, to promote further research and evaluation in this field.
Your #genetic_code has lots of 'words' for the same thing—#information_theory may help explain the #redundancies.
Many of the amino acids that make up proteins are encoded by genetic material in more than one way. An information theorist explains how principles of nature may account for this variance.