03 ASIMOV'S SECOND LAW OF ROBOTICS EXPLAINED #50LAM_ARTIFICIAL_INTELLIGENCE
#AsimovsLaws #SecondLawOfRobotics #AIethics #RobotObedience #IsaacAsimov #HumanRobotInteraction #AIphilosophy #RoboticsFuture #ScienceFiction #TechEthics
MoVEInt: Mixture of Variational Experts for Learning Human-Robot Interactions from Demonstrations
Authors: Vignesh Prasad, Alap Kshirsagar, Dorothea Koert, Ruth Stock-Homburg, Jan Peters, Georgia Chalvatzaki
pre-print -> https://arxiv.org/abs/2407.07636
website -> https://sites.google.com/view/moveint/home
#robotics #deep_learning #vae #mdn #hri #humanrobotinteraction
Shared dynamics models are important for capturing the complexity and variability inherent in Human-Robot Interaction (HRI). Therefore, learning such shared dynamics models can enhance coordination and adaptability to enable successful reactive interactions with a human partner. In this work, we propose a novel approach for learning a shared latent space representation for HRIs from demonstrations in a Mixture of Experts fashion for reactively generating robot actions from human observations. We train a Variational Autoencoder (VAE) to learn robot motions regularized using an informative latent space prior that captures the multimodality of the human observations via a Mixture Density Network (MDN). We show how our formulation derives from a Gaussian Mixture Regression formulation that is typically used in approaches for learning HRI from demonstrations, such as using an HMM/GMM to learn a joint distribution over the actions of the human and the robot. We further incorporate an additional regularization to prevent "mode collapse", a common phenomenon when using latent space mixture models with VAEs. We find that our approach of using an informative MDN prior from human observations for a VAE generates more accurate robot motions compared to previous HMM-based or recurrent approaches for learning shared latent representations, which we validate on various HRI datasets involving interactions such as handshakes, fistbumps, waving, and handovers. Further experiments in a real-world human-to-robot handover scenario show the efficacy of our approach for generating successful interactions with four different human interaction partners.
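To make the architecture concrete, here is a minimal PyTorch sketch of the core idea: a VAE over robot motions whose latent prior is a Gaussian mixture predicted by an MDN from human observations, trained with a single-sample KL estimate. All module names, layer sizes, dimensions, and the sampled KL approximation are illustrative assumptions, not the authors' exact implementation; the paper's actual losses (including its mode-collapse regularizer, omitted here) are in the pre-print linked above.

```python
# Sketch: VAE for robot motions with an MDN prior conditioned on human observations.
# Dimensions and hyperparameters below are assumed for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, HUMAN_DIM, ROBOT_DIM, K = 5, 12, 7, 3  # K = number of mixture components

class MDNPrior(nn.Module):
    """Maps a human observation to a Gaussian mixture over the robot latent space."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(HUMAN_DIM, 64), nn.ReLU())
        self.mu = nn.Linear(64, K * LATENT)
        self.log_sigma = nn.Linear(64, K * LATENT)
        self.logits = nn.Linear(64, K)

    def forward(self, h):
        f = self.body(h)
        mu = self.mu(f).view(-1, K, LATENT)
        sigma = self.log_sigma(f).view(-1, K, LATENT).exp()
        w = F.softmax(self.logits(f), dim=-1)  # mixture weights
        return w, mu, sigma

class RobotVAE(nn.Module):
    """Encodes/decodes robot motions; trained with a KL term to the MDN prior."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(ROBOT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                                 nn.Linear(64, ROBOT_DIM))

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + (0.5 * log_var).exp() * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, log_var

def elbo_loss(vae, prior, x_robot, h_human, beta=0.01):
    """Reconstruction + single-sample KL(q(z|x_robot) || MDN prior p(z|h_human))."""
    recon, q_mu, q_log_var = vae(x_robot)
    z = q_mu + (0.5 * q_log_var).exp() * torch.randn_like(q_mu)
    w, p_mu, p_sigma = prior(h_human)
    # log q(z|x) and log p(z|h), both up to the same additive constant,
    # so the constant cancels in their difference.
    log_q = (-0.5 * (q_log_var + (z - q_mu) ** 2 / q_log_var.exp())).sum(-1)
    comp = (-0.5 * (2 * p_sigma.log()
                    + ((z.unsqueeze(1) - p_mu) / p_sigma) ** 2)).sum(-1)
    log_p = torch.logsumexp(w.log() + comp, dim=1)
    return F.mse_loss(recon, x_robot) + beta * (log_q - log_p).mean()

# Reactive generation at test time (illustrative): sample the latent from the
# MDN prior given the current human observation h_t and decode a robot action.
vae, prior = RobotVAE(), MDNPrior()
h_t = torch.randn(1, HUMAN_DIM)          # stand-in for a real human observation
w, p_mu, _ = prior(h_t)
k = torch.multinomial(w, 1).squeeze(-1)  # or w.argmax(-1) for the dominant mode
action = vae.dec(p_mu[torch.arange(len(w)), k])
```

The key design point this sketch tries to convey is that the prior, not the posterior, carries the human conditioning: at test time the robot never sees its own motion, only the human's, so sampling from the MDN prior and decoding yields a reactive action.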
It seems almost ridiculous to juxtapose our relatively benign #AI discussions (e.g. impact on work, the future of education, etc.) with discussions about whether or not an AI agent can be relied upon to correctly identify and execute enemy #combatants in a #warzone. As with a lot of AI evaluations these days, the findings do not appear to be good.... 😬
https://www.nature.com/articles/s41598-024-69771-z #ArtificialIntelligence #DecisionMaking #robots #HumanRobotInteraction
This research explores prospective determinants of trust in the recommendations of artificial agents regarding decisions to kill, using a novel visual challenge paradigm simulating threat-identification (enemy combatants vs. civilians) under uncertainty. In Experiment 1, we compared trust in the advice of a physically embodied versus screen-mediated anthropomorphic robot, observing no effects of embodiment; in Experiment 2, we manipulated the relative anthropomorphism of virtual robots, observing modestly greater trust in the most anthropomorphic agent relative to the least. Across studies, when any version of the agent randomly disagreed, participants reversed their threat-identifications and decisions to kill in the majority of cases, substantially degrading their initial performance. Participants’ subjective confidence in their decisions tracked whether the agent (dis)agreed, while both decision-reversals and confidence were moderated by appraisals of the agent’s intelligence. The overall findings indicate a strong propensity to overtrust unreliable AI in life-or-death decisions made under uncertainty.
🚀 Excited to share our latest work! 🚀
"AdaptiX - A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics" is now published!
👉 https://dl.acm.org/doi/10.1145/3660243
I am excited to present our work at the EICS 2024 conference in #Cagliari, Italy on June 28th.
#AssistiveTech #Robotics #Human #Robot #HRI #XR #MR #AR #VR #AdaptiX #Research #HumanRobotInteraction #Technology #Accessibility #SharedControl
With the ongoing efforts to empower people with mobility impairments and the increase in technological acceptance by the general public, assistive technologies, such as collaborative robotic arms, are gaining popularity. Yet, their widespread success is ...
Stanford’s HumanPlus Robot Learns Skills by Watching Humans: A New Era in Robotics.
See here - https://techchilli.com/news/stanfords-humanplus-robot-learns-skills-by-watching-humans/
#Stanford #HumanPlus #Robotics #AI #Innovation #TechRevolution #HumanoidRobot #FutureTech #RobotLearning #AIResearch #TechInnovation #HumanRobotInteraction #RoboticSkills
Stanford researchers develop HumanPlus, a humanoid robot that mimics human movements and learns new skills by observation. Discover how this innovative robot can perform tasks like playing piano and ping-pong.
Meet Emo, the robot with a knack for anticipating human smiles and grinning back at just the right moment. Developed by a team at Columbia University, Emo is equipped with generative AI technology that analyzes facial expressions and predicts smiles before they happen.
#Robotics #AI #HumanRobotInteraction #Emo #TechInnovation #FutureTech #GenerativeAI #FacialExpressions #TechNews
Although robots cannot replace human caregivers, projects like EDAN, a CYBATHLON Assistance Robot Race team from the German Aerospace Center (DLR), show how robots can provide support so that caregivers have more time for the personal, human touch. https://shorturl.at/HOWXY
#CYBATHLON #robotics #roboticsystems #humanmachineinteraction #humanrobotinteraction #assistivetechnology #technologicalinnovation #ForAWorldWithoutBarriers