Ronald van Loon (@Ronald_vanLoon)
An article on robots as play companions, highlighting human-robot interaction (HRI), the potential of social robots, and applications in education, therapy, and entertainment, and exploring how robots can take on emotional and social roles. Author: @lukas_m_ziegler.
Today at ICSR25 - 17th International Conference on Social Robotics, I presented the paper "RAGGAE for HERBS: Testing the Explanatory Performance of Ontology-powered LLMs for Human Explanation of Robotic Behaviors" by Agnese Augello, Edoardo Datteri, Antonio Lieto, Maria Rausa and Nicola Zagni
Title: RAGGAE for HERBS: Testing the Explanatory Performance of Ontology-powered LLMs for Human Explanation of Robotic Behaviors
Abstract:
In this work we present RAGGAE (RAG for the General Analysis of Explanans), a RAG-based model tested in the context of Human Explanation of Robotic BehaviorS (HERBS).
The RAGGAE model uses an ontology of explanations to enrich the knowledge of state-of-the-art general-purpose Large Language Models such as Google Gemini 2.0 Flash, DeepSeek R1, and GPT-4o. The results show that combining a general LLM with a symbolic, philosophically grounded ontology can be a useful instrument for improving the investigation, identification, and analysis of the types of explanations that humans use to verbalize, and make sense of, the behavior of robotic agents.
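The core idea of grounding an LLM in an ontology of explanation types can be sketched in a few lines. The ontology entries, the keyword-overlap retrieval, and the prompt layout below are all illustrative assumptions, not the authors' actual implementation:

```python
# Hypothetical mini-ontology of explanation types; the real system uses a
# philosophically grounded ontology of explanations, not this toy dict.
EXPLANATION_ONTOLOGY = {
    "causal": "cites a physical cause or mechanism that produced the behavior",
    "teleological": "cites a goal or purpose the robot is trying to achieve",
    "intentional": "attributes beliefs, desires, or intentions to the robot",
}

def retrieve_explanation_types(utterance, ontology, k=1):
    """RAG retrieval step, here naively ranked by word overlap between the
    human utterance and each ontology entry's description."""
    words = set(utterance.lower().split())
    scored = sorted(
        ontology.items(),
        key=lambda kv: len(words & set(kv[1].split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

def build_prompt(utterance, ontology):
    """Augment the LLM prompt with the retrieved ontology definitions, so the
    general-purpose model classifies against symbolic categories."""
    types = retrieve_explanation_types(utterance, ontology, k=2)
    context = "\n".join(f"- {t}: {ontology[t]}" for t in types)
    return (
        "Using these explanation types:\n" + context +
        f"\nClassify this explanation of a robot's behavior: {utterance!r}"
    )
```

In a full pipeline the returned prompt would be sent to Gemini, DeepSeek, or GPT-4o; a real system would also use embedding-based retrieval rather than word overlap.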
Paper: https://www.ciitlab.org/RAGGAE4HERBS_ICSR2025.pdf
System Live: https://www.ciitlab.org/agent.html
Index Terms: #artificialintelligence #HumanRobotInteraction #explanation #largelanguagemodels #rag #socialrobotics #robots #humanexplanation #cognitivesystems #LLM
Very happy to see that our recent paper on how/if robot gaze affects human gaze behavior is gaining attention! Already 50 downloads and over 1800 views!
Link to paper: https://www.frontiersin.org/articles/10.3389/frobt.2023.1127626/full
Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot’s gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot’s lack of gaze aversion.
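The "well-timed gaze aversions" condition can be pictured as a simple scheduler that caps how long mutual gaze lasts before the robot looks away. This is only a sketch under assumed timing parameters (interval lengths and jitter are invented for illustration), not the study's actual gaze controller:

```python
import random

def gaze_schedule(total_s, mutual_s=3.0, aversion_s=1.5, jitter=0.5):
    """Alternate mutual-gaze and gaze-aversion intervals so that mutual gaze
    never runs much past mutual_s seconds, with a little random jitter to
    avoid a mechanical rhythm. Returns (phase, duration) pairs."""
    t, events = 0.0, []
    while t < total_s:
        hold = mutual_s + random.uniform(-jitter, jitter)
        events.append(("mutual", round(min(hold, total_s - t), 2)))
        t += hold
        if t >= total_s:
            break
        events.append(("avert", aversion_s))
        t += aversion_s
    return events
```

The contrasting "staring" condition would simply be one long `("mutual", total_s)` segment; the study's finding is that humans avert their own gaze more often in that condition, compensating for the robot.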
The International Journal of #SocialRobotics has published "A Communicative Perspective on Human–Robot Collaboration in Industry: Mapping Communicative Modes on Collaborative Scenarios", an article by Brigitte Krenn and Stephanie Gross.
Read more: https://www.ofai.at/news/2023-04-17sr
So it happens we just published a paper... What is it about?
https://link.springer.com/article/10.1007/s12369-022-00942-6
#PaperPublished #ReinforcementLearning #Robotics #Navigation #SocialRobotics
We present a new neuro-inspired reinforcement learning architecture for robot online learning and decision-making in both social and non-social scenarios. The goal is to take inspiration from the way humans dynamically and autonomously adapt their behavior to variations in their own performance while minimizing cognitive effort. Following computational neuroscience principles, the architecture combines model-based (MB) and model-free (MF) reinforcement learning (RL). The main novelty is the arbitration performed by a meta-controller, which selects the current learning strategy according to a trade-off between efficiency and computational cost. The MB strategy, which builds a model of the long-term effects of actions and decides through dynamic programming on that model, enables flexible adaptation to task changes at the expense of high computational cost. The MF strategy is less flexible but also 1000 times less costly, and learns by observing MB decisions. We test the architecture in three experiments: a navigation task in a real environment with task changes (wall-configuration changes, goal-location changes); a simulated object-manipulation task under human teaching signals; and a simulated human–robot cooperation task to tidy up objects on a table. We show that our human-inspired strategy-coordination method enables the robot to maintain optimal performance in terms of reward and computational cost compared to an MB expert alone, which achieves the best performance but at the highest computational cost. We also show that the method copes with sudden changes in the environment, goal changes, or changes in the behavior of the human partner during interaction tasks. The robots that performed these experiments, whether real or virtual, all used the same set of parameters, demonstrating the generality of the method.
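The arbitration idea above can be sketched as a meta-controller that scores each expert by expected decision quality minus weighted compute cost. All numbers, names, and the uncertainty proxy below are illustrative assumptions, not the authors' architecture:

```python
N_STATES, N_ACTIONS = 5, 2

# Model-free expert: a Q-table updated cheaply (cost ~ 1 unit).
q_table = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def mf_decide(state):
    """Greedy MF choice from cached action values."""
    return max(range(N_ACTIONS), key=lambda a: q_table[state][a])

def mf_learn_by_observation(state, action, reward, lr=0.5):
    """The MF expert learns from the MB expert's decisions, as in the paper."""
    q_table[state][action] += lr * (reward - q_table[state][action])

def mb_decide(state, reward_model):
    """Model-based expert: here a stand-in that plans over a known reward
    model (the real MB system uses dynamic programming; cost ~ 1000 units)."""
    return max(range(N_ACTIONS), key=lambda a: reward_model[state][a])

def arbitrate(state, uncertainty, cost_weight=0.0005,
              mb_cost=1000.0, mf_cost=1.0):
    """Meta-controller: trade expected quality against computational cost.
    'uncertainty' is a proxy for how unreliable MF estimates currently are,
    e.g. after a wall-configuration or goal change it would spike."""
    mb_value = 1.0 - cost_weight * mb_cost            # reliable but costly
    mf_value = (1.0 - uncertainty) - cost_weight * mf_cost
    return "MB" if mb_value > mf_value else "MF"
```

With these toy weights, the controller calls the costly MB planner while MF estimates are unreliable and falls back to the cheap MF policy once they converge, which is the efficiency/cost trade-off the abstract describes.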