On Wednesday, a research team from #Disney presented a brand-new robot character during the evening keynote at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2023 in Detroit.
#robot #roboter #iros2023 #robots

https://spectrum.ieee.org/disney-robot

How Disney Packed Big Emotion Into a Little Robot

Melding animation and reinforcement learning for free-ranging emotive performances

IEEE Spectrum
One of my favorite papers from IROS 2023 was "Navlie: A Python Package for State Estimation on Lie Groups". I enjoyed chatting with the author and am excited that the package provides #Python implementations of estimators (including Kalman filters). I look forward to following the package's development.
Docs: https://decargroup.github.io/navlie/_build/html/index.html
#robotics #stateestimation #kalmanfilter #iros #IROS2023
Welcome to navlie! — navlie 0.1.0 documentation
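For anyone curious what such an estimator does under the hood, here is a minimal plain-NumPy sketch of the linear Kalman filter's predict/update cycle. This is generic textbook code for illustration only, not navlie's actual API (navlie's estimators additionally handle states on Lie groups):

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate state mean x and covariance P through linear model F."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Correct the prediction with measurement z via the Kalman gain."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)                  # corrected mean
    P = (np.eye(len(x)) - K @ H) @ P         # corrected covariance
    return x, P

# Toy constant-velocity example: state = [position, velocity].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])        # motion model
Q = 1e-3 * np.eye(2)                         # process noise
H = np.array([[1.0, 0.0]])                   # we only observe position
R = np.array([[0.05]])                       # measurement noise

x, P = np.zeros(2), np.eye(2)
for z in [0.11, 0.22, 0.29, 0.41]:           # noisy position readings
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([z]), H, R)
```

The same predict/correct structure carries over to the Lie-group case; the additions and subtractions just get replaced by group operations on the manifold.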

Goodbye #iros2023 it was a pleasure to meet you all. Many thanks to the organization team!

Today @cvg releases something that has been in the works for nearly a year: Using mixed reality to display robot maps in real-time and control robots through drag-and-drop.
🔗 https://arxiv.org/abs/2310.02392
📺 https://youtu.be/H3IA5FXnFX8

Julia Chen will present this later today at the #IROS2023 #iros23 workshop on mixed reality (free stream on website): https://sites.google.com/view/xr-robotics-iros2023/

A 3D Mixed Reality Interface for Human-Robot Teaming

This paper presents a mixed-reality human-robot teaming system. It allows human operators to see in real-time where robots are located, even if they are not in line of sight. The operator can also visualize the map that the robots create of their environment and can easily send robots to new goal positions. The system mainly consists of a mapping and a control module. The mapping module is a real-time multi-agent visual SLAM system that co-localizes all robots and mixed-reality devices to a common reference frame. Visualizations in the mixed-reality device then allow operators to see a virtual life-sized representation of the cumulative 3D map overlaid onto the real environment. As such, the operator can effectively "see through" walls into other rooms. To control robots and send them to new locations, we propose a drag-and-drop interface. An operator can grab any robot hologram in a 3D mini map and drag it to a new desired goal pose. We validate the proposed system through a user study and real-world deployments. We make the mixed-reality application publicly available at https://github.com/cvg/HoloLens_ros.
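The drag-and-drop interface boils down to one geometric step: a pose dropped in the 3D mini map must be mapped into the shared world frame all robots are co-localized in. Here is a hypothetical NumPy sketch of that step using a similarity transform (the scale, rotation, and translation values are made up for illustration; the paper's actual pipeline lives in the linked repository):

```python
import numpy as np

def minimap_to_world(p_mini, s, R, t):
    """Map a point dropped in the mini-map frame into the shared world
    frame via a similarity transform: p_world = s * R @ p_mini + t."""
    return s * R @ p_mini + t

# Hypothetical calibration: the mini map is a 1:50 model of the building,
# rotated 90 degrees about z and anchored at a known world position.
s = 50.0
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([2.0, 1.0, 0.0])

goal_mini = np.array([0.04, 0.02, 0.0])      # where the hologram was dropped
goal_world = minimap_to_world(goal_mini, s, R, t)
```

Once the goal is expressed in the world frame, it can be sent to the selected robot as an ordinary navigation goal.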

#IROS2023 is almost over! Luckily, we will extend the exciting time in Detroit by giving our workshop on Everyday Activity Manipulation in an Interactive Learning Environment: https://ease-crc.org/teaching-cognition-enabled-cognitive-robotics-in-an-integrated-learning-environment/
Everyday Activity Robot Manipulation in an Interactive Learning Environment – EASE Collaborative Research Center

tomorrow at #IROS2023 #IROS :
@jixing will present our work on multi-modal failure compensation in perception at the Frontiers in Vision & Learning workshop (https://arxiv.org/abs/2110.02549)
and
my student Julia Chen from @cvg will present our new mixed-reality robot interface at the mixed-reality workshop (stay tuned for video + arxiv in the next days)
Sven Behnke from the University of Bonn is giving his keynote right now at #IROS2023. An exciting talk about all the obstacles teleoperation and robotics face in competitions.
Today, we had nice discussions in Teaching and Training Students for Cognitive Robotics at #iros2023. The outcome of our workshop is, not surprisingly, as diverse as the field itself. Many thanks to Michael Beetz, Chad Jenkins, Karinne Ramirez, Jean Oh, Arthur Niedzwiecki, David Vernon, and all the attendees. Looking forward to an exciting week in Detroit.

Congratulations to Effie Daum for winning a Best Video Award at #IROS2023!

Title: Benchmarking ground truth trajectories with robotic total stations
Link to the full video: https://youtu.be/sx0W6JCG9vI?feature=shared

The prize was awarded during the Workshop on Reproducible Robotics Research, organized by the IEEE Technical Committee for Performance Evaluation & Benchmarking of Robotic and Automation Systems.


Next week, David Morilla will be at #IROS2023 presenting our work on "Robust Fusion for Bayesian Semantic Mapping" with Lorenzo Mur and Eduardo Montijano.

A new method to map the environment using a Bayesian neural network as a "semantic sensor".

Paper: https://arxiv.org/abs/2303.07836

Robust Fusion for Bayesian Semantic Mapping

The integration of semantic information in a map allows robots to better understand their environment and make high-level decisions. In the last few years, neural networks have shown enormous progress in their perception capabilities. However, when fusing multiple observations from a neural network in a semantic map, its inherent overconfidence with unknown data gives too much weight to the outliers and decreases the robustness. To mitigate this issue we propose a novel robust fusion method to combine multiple Bayesian semantic predictions. Our method uses the uncertainty estimation provided by a Bayesian neural network to calibrate the way in which the measurements are fused. This is done by regularizing the observations to mitigate the problem of overconfident outlier predictions and using the epistemic uncertainty to weigh their influence in the fusion, resulting in a different formulation of the probability distributions. We validate our robust fusion strategy by performing experiments on photo-realistic simulated environments and real scenes. In both cases, we use a network trained on different data to expose the model to varying data distributions. The results show that considering the model's uncertainty and regularizing the probability distribution of the observations results in better semantic segmentation performance and more robustness to outliers, compared with other methods. Video - https://youtu.be/5xVGm7z9c-0
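To make the fusion idea concrete, here is a minimal NumPy sketch of the general pattern the abstract describes: regularize each observation toward the uniform distribution to temper overconfident outliers, then down-weight high-uncertainty observations when fusing. This is an illustrative simplification, not the paper's exact formulation (the function name and the weighting scheme here are assumptions):

```python
import numpy as np

def fuse_semantic_predictions(probs, epistemic, alpha=0.1):
    """Fuse per-observation class distributions, down-weighting uncertain ones.

    probs:     (N, C) class probabilities, one row per observation
    epistemic: (N,) epistemic-uncertainty scores (higher = less trusted)
    alpha:     mixing weight toward the uniform distribution, which tempers
               overconfident outlier predictions before fusion
    """
    probs = np.asarray(probs, dtype=float)
    n, c = probs.shape
    uniform = np.full(c, 1.0 / c)
    # Regularize each observation toward the uniform distribution.
    reg = (1.0 - alpha) * probs + alpha * uniform
    # Convert epistemic uncertainty into fusion weights in (0, 1].
    w = 1.0 / (1.0 + np.asarray(epistemic, dtype=float))
    # Weighted product of distributions, computed in log space for stability.
    log_fused = (w[:, None] * np.log(reg)).sum(axis=0)
    fused = np.exp(log_fused - log_fused.max())
    return fused / fused.sum()

probs = [[0.9, 0.05, 0.05],   # confident prediction, low uncertainty
         [0.1, 0.1, 0.8]]     # confident outlier, high uncertainty
fused = fuse_semantic_predictions(probs, epistemic=[0.1, 5.0])
```

In this toy example, the high-uncertainty outlier barely moves the fused distribution, which is exactly the robustness property the paper targets.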
