@collaborativeai

15 Followers
1 Following
14 Posts
The Collaborative Artificial Intelligence (CAI) Group, headed by Prof. Dr. Andreas Bulling, is part of the Department of Computer Science at the University of Stuttgart, Germany. Our group conducts fundamental research towards collaborative artificial intelligence at the intersection of multimodal machine learning, computational cognitive modelling, computer vision, and human-machine interaction.
Website: collaborative-ai.org

We are excited to share that two of our papers have been accepted to ETRA 2026!

1. QualitEye: Public and Privacy-preserving Gaze Data Quality Verification
Mayar Elfares, Pascal Reisert, Ralf Küsters, Andreas Bulling

2. Learning Alignments of Human Gaze and Fine-grained Task Descriptions
Takumi Nishiyasu, Zhiming Hu, Andreas Bulling, Yoichi Sato

Congratulations to all authors!

For preprints and updates, feel free to visit our website: https://www.collaborative-ai.org/

#ETRA2026 #EyeTracking #HCI

Collaborative Artificial Intelligence

Our group conducts fundamental research towards collaborative artificial intelligence (CAI) at the intersection of multimodal machine learning, computational cognitive modelling, computer vision, and human-machine interaction.

Excited to share that our paper "RobustSpring: Benchmarking Robustness to Image Corruptions for Optical Flow, Scene Flow and Stereo" has been accepted to #ICLR2026 🎉.

We introduce RobustSpring, a new benchmark that evaluates not only accuracy but also robustness of optical flow, scene flow, and stereo models under 20 real-world image corruptions.

Congratulations to the authors!

๐ŸŒ For more news: https://www.collaborative-ai.org/publications/oei26_iclr/

Which part of graphs do people look at when solving analytical tasks?

📰 Our work "Towards a Better Understanding of Graph Perception in Immersive Environments" was accepted to Graph Drawing #GD2025.

Congratulations to the authors!

Learn more about this work from our website: https://www.collaborative-ai.org/publications/zhang25_gd/

🌟 A month of milestones! 4 papers got accepted at different venues.

๐Ÿ“„ ๐—œ๐— ๐—ช๐—จ๐—ง: โ€œ๐˜›๐˜ฉ๐˜ณ๐˜ฐ๐˜ถ๐˜จ๐˜ฉ ๐˜ต๐˜ฉ๐˜ฆ ๐˜Œ๐˜บ๐˜ฆ๐˜ด ๐˜ฐ๐˜ง ๐˜Œ๐˜ฎ๐˜ฐ๐˜ต๐˜ช๐˜ฐ๐˜ฏ: ๐˜ˆ ๐˜”๐˜ถ๐˜ญ๐˜ต๐˜ช-๐˜ง๐˜ข๐˜ค๐˜ฆ๐˜ต๐˜ฆ๐˜ฅ ๐˜Œ๐˜บ๐˜ฆ ๐˜›๐˜ณ๐˜ข๐˜ค๐˜ฌ๐˜ช๐˜ฏ๐˜จ ๐˜‹๐˜ข๐˜ต๐˜ข๐˜ด๐˜ฆ๐˜ต ๐˜ง๐˜ฐ๐˜ณ ๐˜Œ๐˜ฎ๐˜ฐ๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜™๐˜ฆ๐˜ค๐˜ฐ๐˜จ๐˜ฏ๐˜ช๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜ช๐˜ฏ ๐˜๐˜ช๐˜ณ๐˜ต๐˜ถ๐˜ข๐˜ญ ๐˜™๐˜ฆ๐˜ข๐˜ญ๐˜ช๐˜ต๐˜บโ€

๐Ÿ“„ ๐—œ๐—ฅ๐—ข๐—ฆ: โ€œ๐˜๐˜ฏ๐˜ต๐˜ฆ๐˜ณ๐˜ข๐˜ค๐˜ต๐˜ช๐˜ท๐˜ฆ ๐˜Œ๐˜น๐˜ฑ๐˜ณ๐˜ฆ๐˜ด๐˜ด๐˜ช๐˜ท๐˜ฆ ๐˜”๐˜ฐ๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜Ž๐˜ฆ๐˜ฏ๐˜ฆ๐˜ณ๐˜ข๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜œ๐˜ด๐˜ช๐˜ฏ๐˜จ ๐˜‹๐˜บ๐˜ฏ๐˜ข๐˜ฎ๐˜ช๐˜ค ๐˜”๐˜ฐ๐˜ท๐˜ฆ๐˜ฎ๐˜ฆ๐˜ฏ๐˜ต ๐˜—๐˜ณ๐˜ช๐˜ฎ๐˜ช๐˜ต๐˜ช๐˜ท๐˜ฆ๐˜ดโ€

๐Ÿ“„ ๐—œ๐—–๐——๐—”๐—ฅ: โ€œ๐˜ˆ๐˜ต๐˜ต๐˜ฆ๐˜ฏ๐˜ต๐˜ช๐˜ฐ๐˜ฏ๐˜“๐˜ฆ๐˜ข๐˜ฌ: ๐˜ž๐˜ฉ๐˜ข๐˜ต ๐˜‹๐˜ฐ๐˜ฆ๐˜ด ๐˜๐˜ถ๐˜ฎ๐˜ข๐˜ฏ ๐˜ˆ๐˜ต๐˜ต๐˜ฆ๐˜ฏ๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜™๐˜ฆ๐˜ท๐˜ฆ๐˜ข๐˜ญ ๐˜ˆ๐˜ฃ๐˜ฐ๐˜ถ๐˜ต ๐˜๐˜ฏ๐˜ง๐˜ฐ๐˜ณ๐˜ฎ๐˜ข๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜๐˜ช๐˜ด๐˜ถ๐˜ข๐˜ญ๐˜ช๐˜ด๐˜ข๐˜ต๐˜ช๐˜ฐ๐˜ฏ?โ€

๐Ÿ“„ ๐—ฉ๐—œ๐—ฆ: โ€œ๐˜›๐˜ฆ๐˜ญ๐˜ญ ๐˜”๐˜ฆ ๐˜ž๐˜ช๐˜ต๐˜ฉ๐˜ฐ๐˜ถ๐˜ต ๐˜›๐˜ฆ๐˜ญ๐˜ญ๐˜ช๐˜ฏ๐˜จ ๐˜”๐˜ฆ: ๐˜›๐˜ธ๐˜ฐ-๐˜ž๐˜ข๐˜บ ๐˜—๐˜ณ๐˜ฆ๐˜ฅ๐˜ช๐˜ค๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜ฐ๐˜ง ๐˜๐˜ช๐˜ด๐˜ถ๐˜ข๐˜ญ๐˜ช๐˜ป๐˜ข๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜“๐˜ช๐˜ต๐˜ฆ๐˜ณ๐˜ข๐˜ค๐˜บ ๐˜ข๐˜ฏ๐˜ฅ ๐˜๐˜ช๐˜ด๐˜ถ๐˜ข๐˜ญ ๐˜ˆ๐˜ต๐˜ต๐˜ฆ๐˜ฏ๐˜ต๐˜ช๐˜ฐ๐˜ฏโ€

https://www.collaborative-ai.org/


🚀 Exciting News! 🚀

HOIGaze: Gaze Estimation During Hand-Object Interactions in Extended Reality has been accepted to #SIGGRAPH 2025! 🎉

HOIGaze introduces:
1️⃣ A hierarchical framework that first identifies which hand the user is visually attending to, then estimates gaze direction based on the hand's posture.
2️⃣ A gaze estimation network that combines graph neural networks and cross-modal Transformers.
3️⃣ An eye-head coordination loss function.

๐Ÿ” Learn more: https://collaborative-ai.org/publications/hu25_siggraph/


🎉 Exciting News! 🎉

We're thrilled to share that our group's proposal, "MultiMediate: Multimodal Behaviour Analysis for Artificial Mediation", has been accepted as a Grand Challenge at ACM Multimedia 2025! 🚀

"The goal of this multi-year challenge is to contribute to realising the vision of autonomous artificial mediators by measurable advances in key conversational behaviour sensing and analysis tasks."

For more details, visit https://www.multimediate-challenge.org/.

#MM25

MultiMediate: Multi-modal Group Behaviour Analysis for Artificial Mediation

Grand Challenge at ACM MM'25

🌟 Exciting News! 🌟 Our group's paper "V²Dial: Unification of Video and Visual Dialog via Multimodal Experts" has been accepted at CVPR 2025! 🎉📚

V²Dial is a novel model specifically designed to handle both image and video input data for multimodal conversational tasks. Extensive evaluations on AVSD and VisDial datasets show that V²Dial achieves new state-of-the-art results across multiple benchmarks.

Congratulations to the authors. 🙌

#CVPR2025 #ComputerVision #DeepLearning

📢 New Paper Alert! 📢

We're thrilled to announce that our paper, HAIFAI: Human-AI Collaboration for Mental Face Reconstruction, has been accepted by ACM Transactions on Interactive Intelligent Systems (TiiS).

Congratulations to the authors!

You can check out the preprint on arXiv https://arxiv.org/abs/2412.06323v1 and stay tuned for the camera-ready version on our website https://collaborative-ai.org/.

HAIFAI: Human-AI Collaboration for Mental Face Reconstruction

We present HAIFAI, a novel collaborative human-AI system to tackle the challenging task of reconstructing a visual representation of a face that exists only in a person's mind. Users iteratively rank images presented by the AI system based on their resemblance to a mental image. These rankings, in turn, allow the system to extract relevant image features, fuse them into a unified feature vector, and use a generative model to reconstruct the mental image. We also propose an extension called HAIFAI-X that allows users to manually refine and further improve the reconstruction using an easy-to-use slider interface. To avoid the need for tedious human data collection for model training, we introduce a computational user model of human ranking behaviour. For this, we collected a small face ranking dataset through an online crowd-sourcing study containing data from 275 participants. We evaluate HAIFAI and HAIFAI-X in a 12-participant user study and show that HAIFAI outperforms the previous state of the art in reconstruction quality, usability, perceived workload, and reconstruction speed. HAIFAI-X achieves even better reconstruction quality at the cost of reduced usability, higher perceived workload, and increased reconstruction time. We further validate the reconstructions in a subsequent face ranking study with 18 participants and show that HAIFAI-X achieves a new state-of-the-art identification rate of 60.6%. These findings represent a significant advancement towards developing new collaborative intelligent systems capable of reliably and effortlessly reconstructing a user's mental image.


🎉 Exciting News! 🎉

Our group has two papers (conditionally) accepted to #CHI2025.

1๏ธโƒฃ SummAct: Uncovering User Intentions Through Interactive Behaviour Summarisation

2๏ธโƒฃ How People Read Charts: A Model of Task-driven Eye Movement Control

Stay tuned for more details about the papers at: https://collaborative-ai.org/news/2025/01/two-paper-accepted-at-chi/


🎉 We are thrilled to share that our lab director Prof. @abulling has joined the editorial board of the IEEE Transactions on Visualization and Computer Graphics (TVCG) as of January 1, 2025.

Congratulations, Prof. Bulling! 🌟