@zbucinca's latest #chi2025 paper shows that AI decision recommendations supported by contrastive explanations (choose A instead of B because...) help people grow their skills. But this only happens if the alternative (the B in the contrastive explanation) is something people would plausibly consider. This is important because earlier work showed that people do not learn when AI provides conventional explanations (reasons for/against a decision).
https://iis.seas.harvard.edu/papers/bucinca2025contrastive.pdf
Congratulations to (soon to be) Dr. Zana Buçinca (@zbucinca) for defending her dissertation yesterday!
In her PhD, Zana demonstrated that human cognitive engagement moderates the effectiveness of AI support in human decision-making, introduced cognitive forcing functions, and launched the new sub-field of worker-centric AI.
Her upcoming #CHI2025 paper on Contrastive Explanations That Anticipate Human Misconceptions exemplifies this latest direction in her work.
https://iis.seas.harvard.edu/papers/bucinca2025contrastive.pdf
interested in accessibility, visualization, and data ethics? i'm recruiting phd students to join the Data & Design Group at CU Boulder.
we're building a collaborative and inclusive space for people to grow into interdisciplinary researchers of technology and society.
Have you developed a system and are now writing about it for submission to a Human-Computer Interaction (HCI) venue?
Participate in our study and kick-start your manuscript.
Participants will be expected to (a) interact with a set of design features while planning, writing and revising a section of their system-based HCI manuscript, (b) consent to screen and audio recording, and
(c) complete brief surveys and interviews.
(1/n)
On deskilling, we always knew; it's the AI hypers who want us to forget: "This experience has demonstrated that it is impossible to create an absolutely reliable automatic system, and sooner or later people face the necessity to act after equipment fails." — Valentina Ponomareva
Recruiting Survey Participants!
We are running a study on whether Mastodon feeds show us posts that align with our goals for using social media 🙂
Estimated time: 10 mins
Participate for $5 compensation! (currently US only)
We're building a tool that filters Mastodon posts to help us fulfill our goals for using social media. We'll experiment with several methods (rule-based filters, topic models, large language models) to see whether they can achieve this 🤔
Click here: https://experiments.braids.social/
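As a rough illustration of the rule-based approach mentioned above, here is a minimal sketch of a keyword filter for posts. All names and rules are hypothetical and not the study's actual implementation; topic models or LLMs would slot in as alternative scoring functions.

```python
# Hypothetical sketch of a rule-based filter for posts.
# `keep_keywords` approximate a user's goals; `drop_keywords` mute unwanted topics.

def matches_goal(post: str, keep_keywords: list[str], drop_keywords: list[str]) -> bool:
    """Keep a post if it mentions a goal-aligned keyword and no muted one."""
    text = post.lower()
    if any(word in text for word in drop_keywords):
        return False
    return any(word in text for word in keep_keywords)


def filter_feed(posts: list[str], keep_keywords: list[str], drop_keywords: list[str]) -> list[str]:
    """Return only the posts that pass the rule-based check."""
    return [p for p in posts if matches_goal(p, keep_keywords, drop_keywords)]


# Example: keep research-related posts, mute gossip.
feed = [
    "New #CHI2025 paper on contrastive explanations",
    "Hot take about celebrity gossip",
    "Recruiting PhD students in accessibility",
]
filtered = filter_feed(feed, keep_keywords=["paper", "phd"], drop_keywords=["gossip"])
```

A topic-model or LLM variant would replace `matches_goal` with a learned relevance score and a threshold, which is what the experiments above compare.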