Social Futures Lab

193 Followers
8 Following
19 Posts
Reimagining social and collaborative systems to empower people and improve society. Based in Seattle, WA at UW CSE.
Website: https://social.cs.washington.edu

It seems obvious that we should prioritize addressing misinformation that is more harmful. But what makes something more likely to be harmful? Can we reason about it before the harm has occurred?

📣 In an upcoming #CSCW2024 paper, we present a taxonomy of *Misinformation as a Harm* ➡️

New paper from our lab on reporting systems! https://arxiv.org/abs/2306.10478 Have you ever tried to report something on social media? What do you think gets shared, and who do you think sees the report? We seek to understand what people *think actually happens* when they report something. 1/n
"Is Reporting Worth the Sacrifice of Revealing What I Have Sent?": Privacy Considerations When Reporting on End-to-End Encrypted Platforms

User reporting is an essential component of content moderation on many online platforms -- in particular, on end-to-end encrypted (E2EE) messaging platforms where platform operators cannot proactively inspect message contents. However, users' privacy concerns when considering reporting may impede the effectiveness of this strategy in regulating online harassment. In this paper, we conduct interviews with 16 users of E2EE platforms to understand users' mental models of how reporting works and their resultant privacy concerns and considerations surrounding reporting. We find that users expect platforms to store rich longitudinal reporting datasets, recognizing both their promise for better abuse mitigation and the privacy risk that platforms may exploit or fail to protect them. We also find that users have preconceptions about the respective capabilities and risks of moderators at the platform versus community level -- for instance, users trust platform moderators more to not abuse their power but think community moderators have more time to attend to reports. These considerations, along with perceived effectiveness of reporting and how to provide sufficient evidence while maintaining privacy, shape how users decide whether, to whom, and how much to report. We conclude with design implications for a more privacy-preserving reporting system on E2EE messaging platforms.

arXiv.org

When groups of people make collective judgments, it's not surprising to get uncertainty around the final answer.
Maybe the question is hard? Or perhaps people disagree?

But how do we approach this uncertainty to reach consensus?

Paper: http://arxiv.org/abs/2305.01615
#CSCW2023
(1/n)

Judgment Sieve: Reducing Uncertainty in Group Judgments through Interventions Targeting Ambiguity versus Disagreement

When groups of people are tasked with making a judgment, the issue of uncertainty often arises. Existing methods to reduce uncertainty typically focus on iteratively improving specificity in the overall task instruction. However, uncertainty can arise from multiple sources, such as ambiguity of the item being judged due to limited context, or disagreements among the participants due to different perspectives and an under-specified task. A one-size-fits-all intervention may be ineffective if it is not targeted to the right source of uncertainty. In this paper we introduce a new workflow, Judgment Sieve, to reduce uncertainty in tasks involving group judgment in a targeted manner. By utilizing measurements that separate different sources of uncertainty during an initial round of judgment elicitation, we can then select a targeted intervention adding context or deliberation to most effectively reduce uncertainty on each item being judged. We test our approach on two tasks: rating word pair similarity and toxicity of online comments, showing that targeted interventions reduced uncertainty for the most uncertain cases. In the top 10% of cases, we saw an ambiguity reduction of 21.4% and 25.7%, and a disagreement reduction of 22.2% and 11.2% for the two tasks respectively. We also found through a simulation that our targeted approach reduced the average uncertainty scores for both sources of uncertainty as opposed to uniform approaches where reductions in average uncertainty from one source came with an increase for the other.
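The core idea of separating the two sources of uncertainty can be illustrated with a toy variance decomposition (our own illustration, not the paper's exact measurement procedure): if each annotator judges the same item several times, within-annotator variance roughly corresponds to ambiguity, while variance between annotators' mean judgments roughly corresponds to disagreement.

```python
# Toy sketch (assumed setup, not the Judgment Sieve implementation):
# separate ambiguity from disagreement given repeated ratings per annotator.
from statistics import mean, pvariance

def uncertainty_sources(judgments):
    """judgments: dict mapping annotator -> list of repeated ratings of one item.

    Returns (ambiguity, disagreement):
      ambiguity    = average within-annotator variance (inconsistency with self)
      disagreement = variance of per-annotator mean ratings (inconsistency across people)
    """
    per_annotator_means = [mean(ratings) for ratings in judgments.values()]
    ambiguity = mean(pvariance(ratings) for ratings in judgments.values())
    disagreement = pvariance(per_annotator_means)
    return ambiguity, disagreement

# Annotators agree on average but are individually inconsistent -> ambiguity dominates
amb1, dis1 = uncertainty_sources({"a": [3, 5, 4], "b": [4, 3, 5], "c": [5, 4, 3]})

# Annotators are internally consistent but differ from each other -> disagreement dominates
amb2, dis2 = uncertainty_sources({"a": [1, 1, 1], "b": [3, 3, 3], "c": [5, 5, 5]})
```

A targeted intervention would then add context for high-ambiguity items and deliberation for high-disagreement items, rather than applying one fix uniformly.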

arXiv.org
Congrats to @cqz for graduating with his PhD from UW CSE!! Jim is the first PhD graduate from @socialfutureslab and also my first PhD student! His dissertation was on "Understanding and Addressing Uncertainty of the Crowd", and you can read it here! ➡️ https://homes.cs.washington.edu/~cqz/dissertation.pdf
Listen to @kjfeng talk about his work on social media feed curation using machine teaching at the Knight Institute workshop on Algorithmic Amplification and Society! He was on a panel "Empirical look at user behavior" moderated by @Mor ➡️ https://www.youtube.com/watch?v=00pH6U_-s7g
Empirical look at user behavior (Day 2, Optimizing for What? Algorithmic Amplification and Society)

YouTube

Hey #FAccT2023! Please check out Teanna Barrett’s talk tomorrow on her paper:

"Skin Deep: Investigating Subjectivity in Skin Tone Annotations for Computer Vision Benchmark Datasets"
🔗 https://arxiv.org/abs/2305.09072
🗣️ Tuesday (7/13) @ 2:15pm CT in room W196A
📺 Talk video: https://drive.google.com/file/d/1Pn-q3xZjMNN4fLinb7DyGVCb1VtWZL3M/view

Teanna was an REU intern (!!) with us last summer, mentored by @cqz (also attending!) and will be starting grad school next fall! If you’re at the conference, go talk with them both!

Skin Deep: Investigating Subjectivity in Skin Tone Annotations for Computer Vision Benchmark Datasets

To investigate the well-observed racial disparities in computer vision systems that analyze images of humans, researchers have turned to skin tone as a more objective annotation than race metadata for fairness performance evaluations. However, the current state of skin tone annotation procedures is highly varied. For instance, researchers use a range of untested scales and skin tone categories, have unclear annotation procedures, and provide inadequate analyses of uncertainty. In addition, little attention is paid to the positionality of the humans involved in the annotation process--both designers and annotators alike--and the historical and sociological context of skin tone in the United States. Our work is the first to investigate the skin tone annotation process as a sociotechnical project. We surveyed recent skin tone annotation procedures and conducted annotation experiments to examine how subjective understandings of skin tone are embedded in skin tone annotation procedures. Our systematic literature review revealed the uninterrogated association between skin tone and race and the limited effort to analyze annotator uncertainty in current procedures for skin tone annotation in computer vision evaluation. Our experiments demonstrated that design decisions in the annotation procedure such as the order in which the skin tone scale is presented or additional context in the image (i.e., presence of a face) significantly affected the resulting inter-annotator agreement and individual uncertainty of skin tone annotations. We call for greater reflexivity in the design, analysis, and documentation of procedures for evaluation using skin tone.

arXiv.org
Some new work from our lab relevant to those thinking about AI harms in the here and now! If you're worried about manipulated or synthetic image/video media spreading misinformation on social media, one approach that may get us out of the losing battle of detection is provenance. See thread: https://hci.social/@kjfeng/110465196619512609
Kevin Feng (@[email protected])

Attached: 1 image Imagine if you could see the edit history of images and videos on social media to make a better judgement about their credibility 🧐. We gave users this ability in our #CSCW2023 paper and measured changes in trust and accuracy perceptions: 📜 https://arxiv.org/abs/2303.12118 🧵 1/n

🌱 hci.social
Our stellar undergrads in SFL presenting the great research they did this year! First up we have the UW research symposium, with Pranati, Shreya, and Khushi sharing a poster on a new remote work tool, Andre on medical image uncertainty, and Simona on combating VR harassment!
Second event was the UW Allen undergrad research showcase! Simona and Andre presented their posters on the same projects as above, and Ryan, Lin, and Gloria presented their work on misinformation believability!

Imagine if you could see the edit history of images and videos on social media to make a better judgement about their credibility 🧐. We gave users this ability in our #CSCW2023 paper and measured changes in trust and accuracy perceptions:

📜 https://arxiv.org/abs/2303.12118
🧵 1/n

Examining the Impact of Provenance-Enabled Media on Trust and Accuracy Perceptions

In recent years, industry leaders and researchers have proposed to use technical provenance standards to address visual misinformation spread through digitally altered media. By adding immutable and secure provenance information such as authorship and edit date to media metadata, social media users could potentially better assess the validity of the media they encounter. However, it is unclear how end users would respond to provenance information, or how to best design provenance indicators to be understandable to laypeople. We conducted an online experiment with 595 participants from the US and UK to investigate how provenance information altered users' accuracy perceptions and trust in visual content shared on social media. We found that provenance information often lowered trust and caused users to doubt deceptive media, particularly when it revealed that the media was composited. We additionally tested conditions where the provenance information itself was shown to be incomplete or invalid, and found that these states have a significant impact on participants' accuracy perceptions and trust in media, leading them, in some cases, to disbelieve honest media. Our findings show that provenance, although enlightening, is still not a concept well-understood by users, who confuse media credibility with the orthogonal (albeit related) concept of provenance credibility. We discuss how design choices may contribute to provenance (mis)understanding, and conclude with implications for usable provenance systems, including clearer interfaces and user education.

arXiv.org