Today @cvg releases something that has been in the works for nearly a year: using mixed reality to display robot maps in real time and control robots through drag-and-drop.
🔗 https://arxiv.org/abs/2310.02392
📺 https://youtu.be/H3IA5FXnFX8
Julia Chen will present this later today at the #IROS2023 #iros23 workshop on mixed reality (free stream on website): https://sites.google.com/view/xr-robotics-iros2023/
This paper presents a mixed-reality human-robot teaming system. It allows human operators to see in real time where robots are located, even if they are not in line of sight. The operator can also visualize the map that the robots create of their environment and can easily send robots to new goal positions. The system mainly consists of a mapping and a control module. The mapping module is a real-time multi-agent visual SLAM system that co-localizes all robots and mixed-reality devices to a common reference frame. Visualizations in the mixed-reality device then allow operators to see a virtual life-sized representation of the cumulative 3D map overlaid onto the real environment. As such, the operator can effectively "see through" walls into other rooms. To control robots and send them to new locations, we propose a drag-and-drop interface. An operator can grab any robot hologram in a 3D mini map and drag it to a new desired goal pose. We validate the proposed system through a user study and real-world deployments. We make the mixed-reality application publicly available at https://github.com/cvg/HoloLens_ros.
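For readers curious how the drag-and-drop goal dispatch could look on the robot side, here is a minimal, hypothetical sketch assuming a ROS 1 setup: the node name, the topic /robot_1/move_base_simple/goal, and the frame names are illustrative assumptions, not the released HoloLens_ros code.

```python
# Hypothetical sketch (not the paper's implementation): take a hologram pose
# that the operator dropped in the mini-map frame, express it in the shared
# "map" frame that all robots are co-localized to, and publish it as a goal.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PoseStamped conversions with tf2
from geometry_msgs.msg import PoseStamped

rospy.init_node("drag_and_drop_goal_sender")

tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

# Assumed goal topic for one robot's navigation stack.
goal_pub = rospy.Publisher("/robot_1/move_base_simple/goal",
                           PoseStamped, queue_size=1)

def send_goal(hologram_pose: PoseStamped) -> None:
    """Transform the dragged hologram pose into the shared map frame and publish it."""
    goal = tf_buffer.transform(hologram_pose, "map",
                               timeout=rospy.Duration(0.5))
    goal_pub.publish(goal)
```

The key point the sketch illustrates is that, because robots and mixed-reality devices share one reference frame, a pose dragged in the operator's view can be forwarded to a robot's navigation stack with a single frame transform.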
Congratulations to Effie Daum for winning a Best Video Award at #IROS2023!
Title: Benchmarking ground truth trajectories with robotic total stations
Link to the full video: https://youtu.be/sx0W6JCG9vI?feature=shared
The prize was awarded during the Workshop on Reproducible Robotics Research, organized by the IEEE Technical Committee for Performance Evaluation & Benchmarking of Robotic and Automation Systems.
Next week, David Morilla will be at #IROS2023 presenting our work on "Robust Fusion for Bayesian Semantic Mapping" with Lorenzo Mur and Eduardo Montijano.
A new method to map the environment using a Bayesian neural network as a "semantic sensor".
The integration of semantic information in a map allows robots to better understand their environment and make high-level decisions. In the last few years, neural networks have shown enormous progress in their perception capabilities. However, when fusing multiple observations from a neural network in a semantic map, its inherent overconfidence on unknown data gives too much weight to outliers and decreases robustness. To mitigate this issue, we propose a novel robust fusion method to combine multiple Bayesian semantic predictions. Our method uses the uncertainty estimation provided by a Bayesian neural network to calibrate the way in which the measurements are fused. This is done by regularizing the observations to mitigate the problem of overconfident outlier predictions and using the epistemic uncertainty to weigh their influence in the fusion, resulting in a different formulation of the probability distributions. We validate our robust fusion strategy by performing experiments on photo-realistic simulated environments and real scenes. In both cases, we use a network trained on different data to expose the model to varying data distributions. The results show that considering the model's uncertainty and regularizing the probability distribution of the observations results in better semantic segmentation performance and more robustness to outliers, compared with other methods. Video - https://youtu.be/5xVGm7z9c-0
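To make the fusion idea concrete, here is an illustrative Python sketch, not the authors' implementation: the uniform regularization with strength alpha and the tempering of the observation by (1 - epistemic uncertainty) are assumptions used only to convey the mechanism of regularized, uncertainty-weighted Bayesian fusion of class probabilities.

```python
# Illustrative sketch (assumed formulation, not the paper's exact method):
# fuse one semantic observation into a per-pixel class belief, regularizing
# overconfident predictions and down-weighting uncertain ones.
import numpy as np

def fuse(prior: np.ndarray, obs_probs: np.ndarray,
         epistemic_unc: float, alpha: float = 0.1) -> np.ndarray:
    """Return the updated class belief after fusing one observation.

    prior         : current per-class belief, shape (C,), sums to 1
    obs_probs     : softmax output of the Bayesian network, shape (C,)
    epistemic_unc : scalar epistemic uncertainty for this prediction (0 = confident)
    alpha         : assumed regularization strength toward a uniform distribution
    """
    num_classes = prior.shape[0]
    # Regularize the observation toward uniform to soften overconfident outliers.
    obs = (1.0 - alpha) * obs_probs + alpha * np.full(num_classes, 1.0 / num_classes)
    # Temper the observation's influence by its epistemic uncertainty:
    # a fully uncertain observation (weight 0) leaves the prior unchanged.
    weight = 1.0 - np.clip(epistemic_unc, 0.0, 1.0)
    posterior = prior * obs ** weight
    return posterior / posterior.sum()
```

The sketch shows the two ingredients the abstract describes: regularizing each observation so an overconfident outlier cannot dominate, and using epistemic uncertainty to control how strongly that observation updates the fused semantic map.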