🧵 [1/n]

All AI systems make mistakes.

🧐 What if users could leverage AI flaws to understand the AI & take informed action?

🚀 Our #CSCW2024 paper on Seamful XAI offers a process to foresee, locate, & leverage AI flaws—boosting user understanding & agency.

📜 https://arxiv.org/pdf/2211.06753

But why should you care?⤵️

w/ @Riedl @hal Samir Passi @qveraliao #academia #HCI

🧵 [2/n]

Explainable AI (XAI) Implications:

⚡️ Seams offer 'peripheral vision' of factors beyond the algorithm & reveal AI’s blind spots.

💡 This helps users unpack the 'why-not'—vital for handling AI failures.

🎯 Seamful XAI extends traditional XAI from the 'why' to the 'why-not'.

🧵 [3/n]

Responsible AI (RAI) implications:

✅ Tackles gaps in current RAI methods—good at spotting risks, but weak on action.

✨ Our process empowers users to foresee AI harms, uncover respective seams, locate them in the AI lifecycle, & leverage them to meet user goals.

🧵 [4/n]

Impact:
🚀 Incorporated into NIST's globally adopted AI Risk Management Framework (AI RMF)

🏢 Adopted by major companies for GenAI/LLM red-teaming.

🥇 First work to operationalize seamful design in (X)AI

Here's a real case that illustrates why this matters ⤵️

🧵 [5/n]

🏦 Real case: AI denies Ahmed's loan despite his great record. Why?

🕵️ A hidden seam: AI uses old 3-loan limit, unaware of new 5-loan policy. Ahmed has 4.

💡 If Nadia, the loan officer, knew about this mismatch, she could contest the AI's decision.
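
🧪 A minimal sketch of that seam in code (names & numbers are illustrative, not from the paper): the decision logic still encodes the old limit while the bank's policy has moved on.

```python
from typing import Optional

# Illustrative only: the AI's decision logic still assumes the old policy.
MODEL_ASSUMED_MAX_LOANS = 3    # limit baked in when the model was built
CURRENT_POLICY_MAX_LOANS = 5   # updated bank policy the model never saw

def surface_policy_seam(active_loans: int) -> Optional[str]:
    """Return a seam note when the model's assumption and current policy disagree."""
    denied_by_model = active_loans > MODEL_ASSUMED_MAX_LOANS
    allowed_by_policy = active_loans <= CURRENT_POLICY_MAX_LOANS
    if denied_by_model and allowed_by_policy:
        return (f"Seam: model assumes a {MODEL_ASSUMED_MAX_LOANS}-loan limit, "
                f"but current policy allows {CURRENT_POLICY_MAX_LOANS}. "
                f"Applicant has {active_loans}, so the denial is contestable.")
    return None

print(surface_policy_seam(active_loans=4))  # Ahmed's case
```

Surfacing that mismatch, instead of hiding it behind a seamless "denied", is what gives the loan officer something concrete to contest.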

❓These mismatches are seams. Let's define them.

🧵 [6/n]

🧩 Seams in AI = mismatches between dev assumptions & real-world use. Think of them as cracks where AI stumbles in the real world: AI's blind spots.

📚 Examples of seams: data context shift (trained on US data, deployed in Bangladesh), policy shift, regulatory changes, etc.
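
🧪 One hedged way to locate a data-context-shift seam: a simple drift check between the data the model was trained on and the data it sees in deployment (a sketch under my own assumptions; the feature, numbers, and 0.01 threshold are arbitrary).

```python
import numpy as np
from scipy.stats import ks_2samp

def find_context_shift_seams(train_features, deployed_features, alpha=0.01):
    """Flag features whose deployment distribution diverges from training data (candidate seams)."""
    seams = []
    for name, train_values in train_features.items():
        result = ks_2samp(train_values, deployed_features[name])
        if result.pvalue < alpha:  # distributions differ more than chance alone suggests
            seams.append((name, result.statistic))
    return seams

rng = np.random.default_rng(0)
train = {"income_usd": rng.normal(60_000, 15_000, 1_000)}   # e.g., US training data
deployed = {"income_usd": rng.normal(4_000, 1_500, 1_000)}  # e.g., Bangladesh deployment
print(find_context_shift_seams(train, deployed))            # -> [("income_usd", ...)]
```

A flagged feature isn't automatically a problem; it's a candidate seam worth tethering to a lifecycle stage and examining.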

🧵 [7/n]

🎯 The essence of seamful design (roots in Ubicomp): don’t just reveal seams, leverage them. But why?

💪 To augment user agency.

⚡️ For XAI, agency is operationalized as actionability, contestability, appropriation.

🤔 Seamful design is great, but there’s a challenge…

🧵 [8/n]

🚧 Here's the challenge: seams aren't explicit, they're not easy to find, and they're hard to use.

🧭 People need a process.

🛠️ That's where our methodological contribution comes in – a design process to help people find + leverage seams.

🧵 [9/n]

📐 The Seamful XAI design process has 3 steps
1️⃣ Envisioning AI oopsies (breakdowns)
2️⃣ Anticipating and locating seams
3️⃣ Filtering seams to enhance explainability and user agency

Let's break these down ⤵️
Step 1️⃣: Envisioning harms. Here, think like a supervillain! (Yes, really!)
• Ask: what could go wrong?
• Play a supervillain and make the breakdowns happen!
This adversarial thinking helps anticipate potential issues.

🧵 [10/n]

Step 2️⃣: Anticipating, locating, and crafting seams
• Find the seam = what could cause the breakdowns?
• Tether it = where is it located in the AI lifecycle?
• Craft it = what's the gap between expectation and reality?

Step 3️⃣: Filtering relevant seams to enhance explainability and user agency
• Filter relevant seams = which seams do we show, which do we hide?
• Justify why seams are shown = how does each improve actionability, contestability, appropriation? (A sketch of one captured seam is below ⤵️)
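
🧪 To make the output of steps 2 and 3 concrete, here's a rough sketch of what one captured seam might look like as a structured record (field names are my own, not a template from the paper).

```python
from dataclasses import dataclass

@dataclass
class SeamRecord:
    breakdown: str         # step 1: the envisioned harm or breakdown
    seam: str              # step 2: the mismatch that could cause it
    lifecycle_stage: str   # step 2: where it sits in the AI lifecycle
    expectation: str       # step 2: what the AI/devs assumed
    reality: str           # step 2: what actually holds in deployment
    show_to_user: bool     # step 3: reveal or hide this seam?
    agency_rationale: str  # step 3: how it aids actionability, contestability, or appropriation

ahmed_seam = SeamRecord(
    breakdown="Qualified applicant wrongly denied a loan",
    seam="Loan-limit policy mismatch",
    lifecycle_stage="Problem formulation / business rules",
    expectation="At most 3 active loans per applicant",
    reality="Policy now allows up to 5 active loans",
    show_to_user=True,
    agency_rationale="Gives the loan officer grounds to contest the denial",
)
```

In the paper this is a design activity, not code; the record just makes explicit what each step is meant to produce.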

🧵 [11/n]

But is the process any good?

🔬 We evaluated this process through a scenario-based design interview study with 43 participants across 9 countries and 6 domains.

Here are the key findings ⤵️

🎖️ The design process was teachable and transferable—everyone could craft seams and apply them to their own fields.

🥳 The surprise hit? Roleplaying as a 'supervillain' made it fun and engaging!

🌟 But the biggest takeaway was learning how to leverage seams.

🧵 [12/n]

Findings:

💪 Seamful XAI empowers users by expanding their options—giving them a voice, unlike seamless AI’s ‘take it or leave it’ model.

🚀 The process shifts AI design from reactive patching to proactive anticipation, empowering users to better handle the fallout from AI failures.

🧵 [13/n]

Practical tips

💡 Think sociotechnically—seams often appear where social and technical factors intersect.

💡 Common seam hotspots: data context shifts, model drift, policy changes, & factors beyond the AI's training data or algorithmic scope.

🧵 [14/n]

💌 On a personal note, my biggest joy was working with @hal @qveraliao Samir Passi @Riedl. It felt like playing in a band where everyone is on song.

⚒️ Years in the making, this project has been tough. Translating the abstract concept of seams into applied AI was anything but trivial.

🧵 [15/15]

🤗 Immensely grateful to the participants, Microsoft Research's FATE group, the loan officers who helped craft the scenarios, & Matthew Chalmers for discussing seamful design with me (fun fact: his work was my intro to seamful design).

Paper link: https://arxiv.org/pdf/2211.06753

#academia #AI #HCI