Turns out Aria glasses are a very useful tool for demonstrating actions to robots: based on egocentric video, we track dynamic changes in a 3D scene graph and use this representation to replay or plan interactions for robots.
πŸ”— https://behretj.github.io/LostAndFound/
πŸ“„ https://arxiv.org/abs/2411.19162

#robotics #computervision #mobilemanipulation #CV #ethzurich #unibonn #lamarrinstitute

Lost & Found: Updating Dynamic 3D Scene Graphs from Egocentric Observations

Are you also a bit exhausted after #ICRA submission week? Let us brighten your day with a real "SpotLight" πŸ’‘

πŸ”— https://timengelbracht.github.io/SpotLight/
πŸ“„ https://arxiv.org/abs/2409.11870

We detect and generate interactions for almost any light switch and can then map which switch turns on which light.

#Robotics #ETH #UniBonn #LamarrInstitute

SpotLight: Robotic Scene Understanding through Interaction and Affordance Detection

Not an April Fools' joke, even though it feels a bit surreal: I'm incredibly grateful for the chance to start my own research lab today as junior professor at the University of Bonn and the Lamarr Institute!

If you are interested in working on machine learning for robot perception: I have open PhD positions starting in October.

A treat for myself: I get to create a new website 🀩 Stay tuned!

#UniBonn #Bonn #LamarrInstitute #robotics #robotperception #computervision