🔍 Do privacy labels actually change how people behave online?

Our new paper, “Visual Privacy: The Impact of Privacy Labels on Privacy Behaviors Online,” published in ACM Transactions on Social Computing, investigates exactly this question—and provides both a scalable technical solution and empirical evidence.

👉 Full paper: https://dl.acm.org/doi/abs/10.1145/3804460

#Privacy #DataPrivacy #HCI #PrivacyPolicies #PrivacyProtection

Next was a compelling talk at the UCL Interaction Centre (UCLIC) by Jin Ryong Kim on designing multisensory interfaces, particularly thermal-tactile integration: https://www.youtube.com/watch?v=5nL8QV9GhIc (4/9) #HCI
UCLIC Seminar, 21 April 2026. Jin Ryong Kim (University of Texas at Dallas)

The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows
https://arxiv.org/abs/2604.14807
"a cognitive attribution error in which individuals misinterpret LLM-assisted outputs as evidence of their own independent competence, producing a systematic divergence between perceived and actual capability"
#AIEd #psy #hci #LLM
The rapid integration of large language models (LLMs) into everyday workflows has transformed how individuals perform cognitive tasks such as writing, programming, analysis, and multilingual communication. While prior research has focused on model reliability, hallucination, and user trust calibration, less attention has been given to how LLM usage reshapes users' perceptions of their own capabilities. This paper introduces the LLM fallacy, a cognitive attribution error in which individuals misinterpret LLM-assisted outputs as evidence of their own independent competence, producing a systematic divergence between perceived and actual capability. We argue that the opacity, fluency, and low-friction interaction patterns of LLMs obscure the boundary between human and machine contribution, leading users to infer competence from outputs rather than from the processes that generate them. We situate the LLM fallacy within existing literature on automation bias, cognitive offloading, and human--AI collaboration, while distinguishing it as a form of attributional distortion specific to AI-mediated workflows. We propose a conceptual framework of its underlying mechanisms and a typology of manifestations across computational, linguistic, analytical, and creative domains. Finally, we examine implications for education, hiring, and AI literacy, and outline directions for empirical validation. We also provide a transparent account of human--AI collaborative methodology. This work establishes a foundation for understanding how generative AI systems not only augment cognitive performance but also reshape self-perception and perceived expertise.


We are delighted to announce that the ACM Conference on Human Factors in Computing Systems will be held May 10-14, 2027 in Pittsburgh, Pennsylvania, USA.

Teaser: https://www.youtube.com/watch?v=ZyKljdJ3CJ4
Learn more: https://chi2027.acm.org/

#CHI2027 #HCI #Pittsburgh #ACMCHI

In 1989, an NYT reporter went to SIGCHI and the paper commissioned this adorable little illustration by Stuart Goldenberg for the write-up. Someone put this on a hat for me?

Lewis, Peter H. “PERSONAL COMPUTERS; New Ways To Interact Electronically.” Science. *The New York Times*, May 9, 1989. https://www.nytimes.com/1989/05/09/science/personal-computers-new-ways-to-interact-electronically.html.

#hci #retrocomputing #illustration #drawing

➡️ Just got back from five days at #CHI2026 in Barcelona, and I am still sorting through notes and a long list of papers I want to read properly.

It was nice to see so many familiar faces again, people I usually only run into at conferences like CHI, and to meet new ones along the way. We talked about everything from things close to my own research to topics pretty far from it.

#PersonalizedReality #HCI #AI #personalization
1/3

I've listed some references I found on HCI in building automation and on artificial intelligence in systems. They're highly relevant to situation awareness as well.

https://arttuv.com/writings/ironies-of-automation-and-ai/

#automation #hci #ai #sa

The Ironies of Automation and Artificial Intelligence | Arttu Viljakainen

Pointing to learnings from decades of automation research that we should take into account when building LLM-powered systems.


Saarland Informatics Campus is at #CHI2026 this week as the second-largest contributor among German universities!

With 22 contributions from 24 researchers, our teams brought home the following awards:

🏆x3 Best Paper
🏆x1 SIGCHI Outstanding Dissertation

🔗 Read more: https://sic.link/chi2026

#humancomputerinteraction #SIC #saarlanduniversity #HCI #ACM

📢 Deadline Extended for #MuC2026 Practitioner Track!

🗓️ New Deadline: May 17, 2026 (AoE)
🗓️ Acceptance Notification: June 15, 2026
✅ Topics: Case studies, design methods, reflections, and more.

Whether you're exploring societal change or sharing "lessons learned" from the field, we want to hear from you!

Details & Submission:
🔗 https://muc2026.mensch-und-computer.de/submission/up-practitioner-track/

#HCI #UX #PractitionerTrack #UserExperience #TransformingInteractions

🥽🤖 New at RC Trust: Manshul Belani!
She works on human-centered AI and XR – focusing on usability, accessibility, and inclusive design 🌍.
How should we design trustworthy immersive systems? 💬
https://rc-trust.ai/news/news-detail/designing-technology-around-people
#HCI #HumanCenteredAI #XR #RCTrust