I work on human-centered {security|privacy|computing}. Assistant Professor at CMU HCII, adjunct at GT IC.

I am working on a UX audit agent for my company (https://fuguux.com/).

I passed the output of our audit agent into Claude Code for my personal website and... voilà, a refreshed and much better site. Surprised it worked!

Before -> after

https://sauvik.me

Finding #1: PREs often *shifted perspective*.
In ~74% of reflections, participants anticipated heightened privacy awareness or concern about risk.

…but awareness came with emotional costs.
Many participants anticipated anxiety, frustration, or feeling stuck about trade-offs.

The core design question:

How should PREs be presented so they help people make better disclosure decisions… *without* nudging them into unnecessary self-censorship?

We don't want people to stop posting — we want them to make informed disclosure decisions that account for the risks.

This paper explores how to present “population risk estimates” (PREs): an AI-driven estimate of how uniquely identifiable you are based on your disclosures.

Smaller “k” means you're more identifiable (e.g., k=1 means only one person matches everything you've disclosed).
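For intuition, here's a minimal, hypothetical sketch of a k-style PRE: count how many people in a reference population match *every* detail you've disclosed. The `population` data and the `population_risk_estimate` helper are invented for illustration — this is not the paper's actual method.

```python
from typing import Dict, List

def population_risk_estimate(population: List[Dict[str, str]],
                             disclosures: Dict[str, str]) -> int:
    """Return k: the number of people matching every disclosed detail."""
    return sum(
        all(person.get(attr) == value for attr, value in disclosures.items())
        for person in population
    )

# Toy reference population (a real system would use a far larger dataset).
population = [
    {"city": "Pittsburgh", "job": "professor", "hobby": "climbing"},
    {"city": "Pittsburgh", "job": "professor", "hobby": "chess"},
    {"city": "Atlanta",    "job": "professor", "hobby": "climbing"},
]

print(population_risk_estimate(population, {"city": "Pittsburgh"}))  # k=2
print(population_risk_estimate(population, {"city": "Pittsburgh",
                                            "hobby": "climbing"}))   # k=1: unique
```

Each added disclosure can only shrink the matching set — which is why k drops as you share more.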

The 5 concepts ranged from raw scores to actionable breakdowns (a rough sketch of (3) and (5) follows the list):

(1) raw k-anonymity score
(2) a re-identifiability “meter”
(3) low/med/high simplified risk
(4) threat-specific risk
(5) “risk by disclosure” (which details contribute most)
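To make (3) and (5) concrete, here's a rough sketch under the same toy assumptions as above: bucket the raw k into a simplified label, and use leave-one-out recomputation to see which disclosure contributes most. The thresholds and helper names are invented for illustration, not taken from the paper.

```python
from typing import Dict, List

def population_risk_estimate(population: List[Dict[str, str]],
                             disclosures: Dict[str, str]) -> int:
    # Same helper as the earlier sketch: count people matching all details (k).
    return sum(all(p.get(a) == v for a, v in disclosures.items()) for p in population)

def risk_bucket(k: int) -> str:
    """Concept (3): collapse a raw k into a simplified low/med/high label."""
    if k <= 5:
        return "high"
    if k <= 100:
        return "medium"
    return "low"

def risk_by_disclosure(population: List[Dict[str, str]],
                       disclosures: Dict[str, str]) -> Dict[str, int]:
    """Concept (5): leave-one-out — how much does dropping each detail raise k?"""
    base_k = population_risk_estimate(population, disclosures)
    return {
        attr: population_risk_estimate(
            population, {a: v for a, v in disclosures.items() if a != attr}
        ) - base_k
        for attr in disclosures
    }  # larger delta = that detail contributes more to your identifiability
```

The leave-one-out deltas give users something actionable: remove the highest-delta detail and your anonymity set grows the most.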

This paper is the latest in a productive collaboration between my lab, @cocoweixu, and @alan_ritter.

ACL'24 -> a SOTA self-disclosure detection model
CSCW'25 -> a human-AI collaboration study of disclosure risk mitigation
NeurIPS'25 -> a method to quantify self-disclosure risk

📣 New at #CHI2026
People share sensitive things “anonymously”… but anonymity is hard to reason about.

What if we could quantify re-identification risk with AI? How should we present those AI-estimated risks to users?

Led by my student Isadora Krsek

Paper: https://www.sauvik.me/papers/70/serve

Method: speculative design + design fictions.

We storyboarded 5 PRE UI concepts as comic boards (different ways to show risk + what’s driving it).

📣 New at #CHI2026

Developing a new AI product? How would you figure out what the privacy risks are?

Privy helps non-privacy-expert practitioners create high-quality privacy impact assessments for early-stage AI products.

Led by Hank Lee
Paper: https://www.sauvik.me/papers/69/serve

In prior work, we introduced a taxonomy of AI privacy risks (CHI'24 best paper) and found that practitioners face significant awareness, motivation, and ability barriers when engaging in AI privacy work (USENIX SEC distinguished paper).

Privy is a follow-up to this line of work.