I am working on a UX audit agent for my company (https://fuguux.com/).
I passed the output of our audit agent into Claude Code for my personal website and... voila, a refreshed and much better site. Surprised it worked!
Before -> after
In short: Quantifying privacy risks can help users make more informed decisions—but the UX needs to present risks in a manner that is interpretable and actionable to truly *empower* users, rather than scare them.
Thanks @NSF for supporting this work!
Finding #1: PREs often *shifted perspective*.
In ~74% of reflections, participants anticipated heightened privacy awareness or risk concern.
…but awareness came with emotional costs.
Many participants anticipated anxiety, frustration, or feeling stuck when weighing trade-offs.
The core design question:
How should PREs be presented so they help people make better disclosure decisions… *without* nudging them into unnecessary self-censorship?
We don't want people to stop posting; we want them to make informed disclosure decisions that account for the risks.
This paper explores how to present “population risk estimates” (PREs): an AI-driven estimate of how uniquely identifiable you are based on your disclosures.
Smaller “k” means you're more identifiable (e.g., k=1 means only one person matches everything you have disclosed).
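For intuition, here's a toy sketch of the k-anonymity idea behind a PRE. The population data, field names, and matching rule below are all made up for illustration; the paper's actual estimator is AI-driven.

```python
# Toy sketch of the k-anonymity intuition behind a PRE.
# Population records, field names, and the exact-match rule are invented;
# the paper's actual risk estimates are AI-driven, not a lookup like this.
population = [
    {"city": "Atlanta", "job": "nurse",   "age_range": "30s"},
    {"city": "Atlanta", "job": "nurse",   "age_range": "40s"},
    {"city": "Atlanta", "job": "teacher", "age_range": "30s"},
    # ...imagine millions of records
]

def k_for_disclosures(disclosed: dict, records: list[dict]) -> int:
    """Count how many people match *everything* the user has disclosed.

    k = 1 means the disclosures single out exactly one person.
    """
    return sum(
        all(r.get(field) == value for field, value in disclosed.items())
        for r in records
    )

print(k_for_disclosures({"city": "Atlanta", "job": "nurse"}, population))  # 2
print(k_for_disclosures(
    {"city": "Atlanta", "job": "nurse", "age_range": "30s"}, population))  # 1 -> uniquely identifiable
```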
Interestingly, no single UI for presenting PREs to users “won”.
Participants didn’t show a strong overall preference across the five designs (though “risk by disclosure” tended to be liked more; the meter less).
So what *should* PRE designs do? 4 design recommendations:
…but sometimes PREs encouraged self-censorship.
A meaningful chunk of reflections ended with deleting the post, not posting at all, or even leaving the platform.
The 5 concepts ranged from:
(1) raw k-anonymity score
(2) a re-identifiability “meter”
(3) low/med/high simplified risk
(4) threat-specific risk
(5) “risk by disclosure” (which details contribute most)
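To make the link between concepts (1) and (3) concrete, here's a hypothetical bucketing of a raw k score into a simplified label. The cutoffs are invented; the paper doesn't specify how its designs discretize k.

```python
def simplified_risk(k: int, high_cutoff: int = 10, low_cutoff: int = 1000) -> str:
    """Bucket a raw k-anonymity score (concept 1) into a low/med/high label (concept 3).

    The cutoffs here are invented for illustration; the paper does not
    specify how its designs discretize k.
    """
    if k <= high_cutoff:
        return "high"    # few matching people -> easy to re-identify
    if k <= low_cutoff:
        return "medium"
    return "low"

print(simplified_risk(1))       # "high": uniquely identifiable
print(simplified_risk(200))     # "medium"
print(simplified_risk(50_000))  # "low"
```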
This paper is the latest in a productive collaboration between my lab, @cocoweixu, and @alan_ritter.
ACL'24 -> a SOTA self-disclosure detection model
CSCW'25 -> a human-AI collaboration study of disclosure risk mitigation
NeurIPS'25 -> a method to quantify self-disclosure risk