Hermann Blum

39 Followers
66 Following
29 Posts
roboticist, postdoc @ ETH Zürich
website: https://hermannblum.net

It takes a while to make fancy #NeRF animations, so I am very happy we can now share our upcoming #CVPR paper with video and code release:
A big debate in #ContinualLearning is how to scale to many experiences. This work shows how well NeRF-based compression can scale to store robotic experiences over many consecutive deployments, much better than storing checkpoints of your model.

website: https://ethz-asl.github.io/ucsa_neural_rendering/
#Robotics #SceneUnderstanding

Unsupervised Continual Semantic Adaptation through Neural Rendering

A little thing that I enjoy every time I walk into the lab: We got yellow filament and can now print all our robot add-ons in matching colors 😃🦾
Part of a great conference is a well-organized submission process. That's why I am getting more and more disappointed with RSS. They announced a demo track, but planned to publish detailed submission info only a month before the deadline. That Jan 4 announcement was missed and officially pushed to Jan 8, which they also missed. Today we are 2 weeks before the submission deadline and there is still no info about the demo track, and no response to emails to the chair.
Yesterday, our students had their big demo day showing off their concepts on how to link robots to mixed reality devices. Lots of cool ideas!
RSIPvision featured my PhD thesis in their newest magazine :)
https://rsipvision.com/ComputerVisionNews-2022November/28/
They are also the first to print the awesome comic that Simon Gies gifted to me ⬇️
Computer Vision News - November 2022

The magazine of the algorithm community

2 weeks ago I defended my PhD thesis "Self-improving, open-world robotic scene understanding". Thank you @[email protected] @[email protected] for 4.5 amazing years in such a great environment and so many opportunities to collaborate with others.
Never before have so many of the authors who submitted to Fishyscapes been in the same room! Very cool workshop in Zagreb on Robust Scene Understanding, organised by Uni Zagreb and Uni Wuppertal!
Good things take time: After good feedback at the NeurIPS workshop, our continual, self-supervised domain adaptation for generic indoor semantics is now available in RA-L!
The main work was done by Jonas, who is now at RSL.
https://doi.org/10.1109/LRA.2022.3203812
Continual Adaptation of Semantic Segmentation Using Complementary 2D-3D Data Representations

Semantic segmentation networks are usually pre-trained once and not updated during deployment. As a consequence, misclassifications commonly occur if the distribution of the training data deviates from the one encountered during the robot's operation. We propose to mitigate this problem by adapting the neural network to the robot's environment during deployment, without any need for external supervision. Leveraging complementary data representations, we generate a supervision signal, by probabilistically accumulating consecutive 2D semantic predictions in a volumetric 3D map. We then train the network on renderings of the accumulated semantic map, effectively resolving ambiguities and enforcing multi-view consistency through the 3D representation. In contrast to scene adaptation methods, we aim to retain the previously-learned knowledge, and therefore employ a continual learning experience replay strategy to adapt the network. Through extensive experimental evaluation, we show successful adaptation to real-world indoor scenes both on the ScanNet dataset and on in-house data recorded with an RGB-D sensor. Our method increases the segmentation accuracy on average by 9.9% compared to the fixed pre-trained neural network, while retaining knowledge from the pre-training dataset.
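To make the core mechanism from the abstract concrete, here is a minimal, hypothetical numpy sketch of the two steps it describes: probabilistically accumulating per-pixel class probabilities into a voxel map, and rendering multi-view-consistent pseudo-labels back out of it. The grid size, the 40-class label set, and names like `integrate_frame` are illustrative assumptions, not the paper's actual implementation (which uses full camera geometry and a learned segmentation network).

```python
# Hypothetical sketch, not the authors' code: Bayesian fusion of per-pixel
# class probabilities into a voxel map, then multi-view-consistent pseudo-labels.
import numpy as np

NUM_CLASSES = 40            # assumed indoor label set size (e.g. NYU40)
GRID = (64, 64, 32)         # toy voxel-grid resolution (assumption)

# Per-voxel accumulated class log-probabilities, starting from a uniform prior.
log_probs = np.zeros(GRID + (NUM_CLASSES,), dtype=np.float32)

def integrate_frame(log_probs, voxel_idx, pixel_probs):
    """Accumulate one frame's 2D semantic prediction into the 3D map.

    voxel_idx   : (N, 3) voxel coordinates of the N back-projected pixels
                  (obtained from depth and camera pose).
    pixel_probs : (N, NUM_CLASSES) softmax output of the segmentation network.
    """
    # Bayesian update in log space: multiplying likelihoods = adding logs.
    np.add.at(log_probs, tuple(voxel_idx.T),
              np.log(np.clip(pixel_probs, 1e-6, 1.0)))
    return log_probs

def render_pseudo_labels(log_probs, voxel_idx):
    """Render the fused map back into a frame: each pixel's pseudo-label is the
    argmax over the accumulated class distribution of the voxel it hits."""
    return log_probs[tuple(voxel_idx.T)].argmax(axis=-1)

# Toy usage with random data standing in for a real RGB-D frame.
rng = np.random.default_rng(0)
voxels = rng.integers(0, 32, size=(1000, 3))    # stand-in for back-projected pixels
probs = rng.dirichlet(np.ones(NUM_CLASSES), size=1000).astype(np.float32)
log_probs = integrate_frame(log_probs, voxels, probs)
pseudo_labels = render_pseudo_labels(log_probs, voxels)
# Per the abstract, the network is then fine-tuned on such pseudo-labels, mixed
# with replayed pre-training samples to retain previously learned knowledge.
```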

Since I am #newhere I want to try a different approach than on Twitter: more behind-the-scenes info from my robotics research.
This is a little robot I've been working on with my colleagues & students. Usually in robotics we run experiments once for a paper. Even though we research autonomy, each paper only covers a few aspects and the rest is scripted. The goal of this robot is to build a reliable autonomy stack so that it can regularly drive around in our lab. It just made its first steps 🍾🦾🦿