Vera Liao

@qveraliao@hci.social
66 Followers
18 Following
10 Posts
Researcher @ Microsoft Research FATE group studying human-AI interaction. She/her.

@jenn @qveraliao @PrincetonCS @hci @citp

"Fostering Appropriate Reliance on LLMs" received an Honorable Mention at #CHI2025

This work is also the last chapter of my dissertation, so the recognition feels more special🏅🎓😊

🎉 to the team!!!

https://programs.sigchi.org/chi/2025/program/content/188664

Appropriate reliance is key to safe and successful user interactions with LLMs. But what shapes user reliance on LLMs, and how can we foster appropriate reliance?

In our #CHI2025 paper, we explore these questions through two user studies.

1/7

As #AI takes on a growing role in creation and art, how are public discourses on AI in the arts shaping creative work?

That's what we investigate in a new paper with @Katecrawford, @qveraliao, Gonzalo Ramos, and Jenny Williams: arxiv.org/abs/2502.03940

🧵 [1/n]

New paper to share! 📣 @qveraliao and I lay out our vision of a human-centered research roadmap for “AI Transparency in the Age of LLMs.”

https://arxiv.org/abs/2306.01941

There's lots of talk about the responsible development and deployment of LLMs, but transparency (including model reporting, explanations, uncertainty communication, and more) is often missing from this discourse.

We hope this framing will spark more discussion and research.

Attempting my first mastodon thread below... 🧵

AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap

The rise of powerful large language models (LLMs) brings about tremendous opportunities for innovation but also looming risks for individuals and society at large. We have reached a pivotal moment for ensuring that LLMs and LLM-infused applications are developed and deployed responsibly. However, a central pillar of responsible AI -- transparency -- is largely missing from the current discourse around LLMs. It is paramount to pursue new approaches to provide transparency for LLMs, and years of research at the intersection of AI and human-computer interaction (HCI) highlight that we must do so with a human-centered perspective: Transparency is fundamentally about supporting appropriate human understanding, and this understanding is sought by different stakeholders with different goals in different contexts. In this new era of LLMs, we must develop and design approaches to transparency by considering the needs of stakeholders in the emerging LLM ecosystem, the novel types of LLM-infused applications being built, and the new usage patterns and challenges around LLMs, all while building on lessons learned about how people process, interact with, and make use of information. We reflect on the unique challenges that arise in providing transparency for LLMs, along with lessons learned from HCI and responsible AI research that has taken a human-centered perspective on AI transparency. We then lay out four common approaches that the community has taken to achieve transparency -- model reporting, publishing evaluation results, providing explanations, and communicating uncertainty -- and call out open questions around how these approaches may or may not be applied to LLMs. We hope this provides a starting point for discussion and a useful roadmap for future research.

Join the 2nd Explainable AI for CV (XAI4CV) workshop at #CVPR2023!
https://xai4cv.github.io/workshop_cvpr23

👥 The workshop will take place on June 19th in Vancouver, Canada. More to follow on the in-person/hybrid setup

🎤 We have a fantastic line-up of speakers: @qveraliao, Mohit Bansal, Marina M.-C. Höhne (née Vidovic), @arvind, Alice Xiang, and @davidbau

📄 Submit papers and demos by March 14th (proceedings track) and May 19th (non-proceedings track)

We are accepting applications for a postdoc job with us at MSR in Cambridge MA. Super sweet gig, link to apply is here

https://careers.microsoft.com/us/en/job/1488454/Post-Doc-Researcher-Socio-Technical-Systems-Microsoft-Research

📢 The FATE (Fairness, Accountability, Transparency and Ethics in AI) group at Microsoft Research Montreal is hiring interns for 2023! Looking for candidates with broad FATE interests including responsible NLP/NLG, human-centered AI, AI transparency and explainability, and future of work. Apply here: https://careers.microsoft.com/us/en/job/1488252/Stagiaire-de-recherche-FATE-Research-Intern-FATE-Montreal-Fairness-Accountability-Transparency-and-Ethics-in-AI
#FATE #HCI #AI #NLP

🧵 [1/n]

All AI systems make mistakes.

🧐 What if users could leverage AI flaws to understand it & take informed actions?

🚀 Our #CSCW2024 paper on Seamful XAI offers a process to foresee, locate, & leverage AI flaws—boosting user understanding & agency.

📜 https://arxiv.org/pdf/2211.06753

But why should you care?⤵️
1/n

w/ @Riedl @hal Samir Passi @qveraliao #academia #HCI

🧵 [2/n]

Explainable AI (XAI) Implications:

⚡️ Seams offer 'peripheral vision' of factors beyond the algorithm & reveal AI’s blind spots.

💡 This helps users unpack the 'why-not'—vital for handling AI failures.

🎯 Seamful XAI extends traditional XAI from the 'why' to the 'why-not'

🧵 [3/n]

Responsible AI (RAI) implications:

✅ Tackles gaps in current RAI methods—good at spotting risks, but weak on action.

✨ Our process empowers users to foresee AI harms, uncover the corresponding seams, locate them in the AI lifecycle, & leverage them to meet user goals.

🧵 [4/n]

Impact:
🚀 Incorporated into NIST's AI Risk Management Framework, a globally adopted RAI framework

🏢 Adopted by major companies for GenAI/LLM red-teaming.

🥇 First work to operationalize seamful design in (X)AI

Here's a real case that illustrates why this matters ⤵️

🧵 [5/n]

🏦 Real case: the AI denies Ahmed's loan despite his great record. Why?

🕵️ A hidden seam: the AI still applies an old 3-loan limit, unaware of the new 5-loan policy. Ahmed has 4.

💡 If Nadia, the loan officer, knew about this mismatch, she could contest the AI's decision.

❓These mismatches are seams. Let's define them.

🧵 [6/n]

🧩 Seams in AI = mismatches between dev assumptions & real-world use. Think of them as cracks where AI stumbles in the real world: AI's blind spots.

📚 Examples of seams: data context shift (trained on US data, deployed in Bangladesh), policy shift, regulatory changes, etc.

🧵 [7/n]

🎯 The essence of seamful design (roots in Ubicomp): don’t just reveal seams, leverage them. But why?

💪 To augment user agency.

⚡️ For XAI, agency is operationalized as actionability, contestability, appropriation.

🤔 Seamful design is great, but there’s a challenge…

🧵 [8/n]

🚧 Here's the challenge: seams aren't explicit, they're not easy to find, and using them is hard.

🧭 People need a process.

🛠️ That's where our methodological contribution comes in – a design process to help people find + leverage seams.

🧵 [9/n]

📐 The Seamful XAI design process has 3 steps
1️⃣ Envisioning AI oopsies (breakdowns)
2️⃣ Anticipating and locating seams
3️⃣ Filtering seams to enhance explainability and user agency

Let's break these down ⤵️

Step 1️⃣: Envisioning harms. Here, think like a supervillain! (Yes, really!)
• Ask: what could go wrong?
• Play a supervillain and make the breakdowns happen!
This adversarial thinking helps anticipate potential issues.

🧵 [10/n]

Step 2️⃣: Anticipating, locating, and crafting seams
• Find the seam = what could cause the breakdowns?
• Tether it = where are they located in the AI's lifecycle?
• Craft it = what's the gap between expectation and reality?

Step 3️⃣: Filtering relevant seams to enhance explainability and user agency
• Filter relevant seams = which seams do we show, which do we hide?
• Justify why seams are shown = how does each improve actionability, contestability, appropriation?

🧵 [11/n]

But is the process any good?

🔬 We evaluated this process through a scenario-based design interview study with 43 participants across 9 countries and 6 domains.

Here are the key findings ⤵️

🎖️ The design process was teachable and transferable—everyone could craft seams and apply them to their fields.

🥳 The surprise hit? Roleplaying as a 'supervillain' made it fun and engaging!

🌟 But the biggest takeaway was learning how to leverage seams.

🧵 [12/n]

Findings:

💪 Seamful XAI empowers users by expanding their options—giving them a voice, unlike seamless AI’s ‘take it or leave it’ model.

🚀 The process shifts AI design from reactive patching to proactive anticipation, empowering users to better handle the fallout from AI failures.

🧵 [13/n]

Practical tips

💡 Think sociotechnically—seams often appear where social and technical factors intersect.

💡 Common seam hotspots: data context shifts, model drift, policy changes, and factors beyond the AI's training data or algorithmic scope.

🧵 [14/n]

💌 On a personal note, my biggest joy was working with @hal @qveraliao Samir Passi @Riedl. It felt like playing in a band where everyone is on song.

⚒️ Years in the making, this project has been tough. Translating the abstract concept of seams into applied AI was anything but trivial.

🧵 [15/15]

🤗 Immensely grateful to the participants, Microsoft Research's FATE group, the loan officers who helped craft the scenarios, & Matthew Chalmers for discussing seamful design with me (fun fact: his work was my intro to seamful design).

Paper link: https://arxiv.org/pdf/2211.06753

#academia #AI #HCI