Michelle Lam

PhD student @ Stanford HCI!
social computing, human-centered AI, algorithmic fairness (+ dance, design, doodling!) | she/her
Website: http://michelle123lam.github.io
Twitter: @michelle123lam

ML model fairness issues are often addressed either with ethical frameworks beforehand or with metrics, audits, and post-hoc fixes after the fact. Model sketching brings this normative thinking into the development process itself, grounding it in functional models and realistic data insights.

Thanks so much to my co-authors: awesome undergrad interns Zixian Ma, Anne Li, Ulo Freitas; collaborator Dakuo Wang; and my wonderful advisors @landay, @msbernst!! See the paper at: https://hci.stanford.edu/publications/2023/Lam_ModelSketching_CHI23.pdf
(6/6)

Our tool helped ML practitioners create multiple models for detecting hateful memes in just 30 mins, exploring 130+ concepts. Instead of tunneling on technical details, they focused on user harms to inform their models and identified data representativeness & labeling issues.
(5/6)
Leveraging the zero-shot capabilities of pretrained models (GPT, CLIP), our ModelSketchBook Python package enables rapid model design exploration. Users can interactively specify concepts, combine them in sketches, run them on data, and directly iterate with new concept ideas.
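To make the workflow concrete, here is a minimal, hypothetical sketch of the concept-then-combine idea (this is illustrative pseudocode-style Python, not the actual ModelSketchBook API; concept scoring is mocked with keyword matching in place of a zero-shot model like GPT or CLIP):

```python
# A "concept" scores each example in [0, 1]; a "sketch" aggregates
# concept scores into a functional model you can run and iterate on.
# Zero-shot scoring is mocked with keyword matching for illustration.

def make_concept(name, keywords):
    """Return a concept function: text -> score in [0, 1]."""
    kws = [k.lower() for k in keywords]
    def score(text):
        t = text.lower()
        hits = sum(1 for k in kws if k in t)
        return min(1.0, hits / max(1, len(kws)))
    score.concept_name = name
    return score

def make_sketch(concepts, threshold=0.2):
    """Combine concepts into a 'sketch' model by averaging their scores."""
    def predict(text):
        avg = sum(c(text) for c in concepts) / len(concepts)
        return ("flag", avg) if avg >= threshold else ("ok", avg)
    return predict

# Specify concepts in human-understandable terms, then iterate:
# swap concepts in and out, adjust the threshold, rerun on data.
profanity = make_concept("profanity", ["damn", "hell"])
bullying = make_concept("bullying", ["loser", "stupid", "worthless"])
sketch = make_sketch([profanity, bullying])

label, score = sketch("You are such a stupid loser")
```

Adding a new concept idea is just another `make_concept` call, which is the rapid-iteration loop the package is built around.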
(4/6)
We introduce model sketching, a technical framework that lets ML practitioners express modeling ideas purely in terms of human-understandable concepts, but brings them to life as functional “sketch” models that they can evaluate and iterate on in early-stage model design.
(3/6)
For example, what factors should our model consider in a content moderation task? Profanity, bullying, sexism? How do we define them? Are they sufficient? If we could iterate on models in these terms, we could grapple with their human impact & ethical implications early on.
(2/6)
When building ML models, we often get pulled into technical implementation details rather than deliberating over critical normative questions. What if we could directly articulate the high-level ideas that models ought to reason over? #CHI2023 🧵
(1/6)