147 Followers
44 Following
36 Posts
Data Scientist & Moral Psychologist. Now @ http://PsychOfTech.org + USC Neely Center. Former civic integrity/AI/Newsfeed @ Facebook, Cofounder @ Ranker/CivilPolitics.

PANEL 4: Reform (Part 1)

With @__lucab at UC Berkeley, @riyer at Psychology of Technology Institute, @yoyoel at UC Berkeley, and moderator Camille Francois at Columbia University.

Join us on April 28-29 for "Optimizing for What? Algorithmic Amplification and Society" – an event curated w/ @randomwalker exploring how online amplification works and what can be done to mitigate its harms while taking advantage of its benefits. RSVP: https://www.eventbrite.com/e/optimizing-for-what-algorithmic-amplification-and-society-tickets-558764247907
Optimizing for What? Algorithmic Amplification and Society

A two-day symposium exploring algorithmic amplification and distortion as well as potential interventions

Eventbrite
@chris +1 to all these platforms being activitypub compatible. I'm not sure if Mastodon will reach critical mass or be good enough to replace Twitter. But I would bet that all these systems combined could make something with escape velocity. Project Narwhal might be an interesting fork as well: https://www.thenarwhalproject.com/
The Narwhal Project

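For anyone wondering what "ActivityPub compatible" means in practice: a compliant server exposes its users via WebFinger and serves actor profiles as ActivityStreams JSON, which is what lets independent platforms address and follow each other's users. Here is a minimal sketch of that handshake; the handle and the `requests` dependency are just for illustration, not part of any of the projects above.

```python
# Minimal sketch of ActivityPub interoperability: resolve an @user@domain
# handle via WebFinger (RFC 7033), then fetch the actor document that lists
# the user's inbox/outbox. The handle below is only an example; requires the
# `requests` package.
import requests

def fetch_actor(handle: str) -> dict:
    """Resolve an @user@domain handle to its ActivityPub actor document."""
    user, domain = handle.lstrip("@").split("@")

    # Step 1: WebFinger lookup to find the actor's ActivityPub URL.
    webfinger = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        timeout=10,
    ).json()
    actor_url = next(
        link["href"]
        for link in webfinger["links"]
        if link.get("type") == "application/activity+json"
    )

    # Step 2: fetch the actor document itself (inbox, outbox, followers, ...).
    return requests.get(
        actor_url,
        headers={"Accept": "application/activity+json"},
        timeout=10,
    ).json()

if __name__ == "__main__":
    actor = fetch_actor("@Gargron@mastodon.social")  # example handle
    print(actor.get("inbox"), actor.get("outbox"))
```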
Reddit just opened their call for Data Science interns this summer, with teams recruiting for MS and PhD students. I'm specifically looking for a PhD student to work with me on Community -- check out this Reddit post to learn more!

https://www.reddit.com/r/CompSocial/comments/109am3a/love_reddit_why_dont_you_intern_there_call_for/

Love Reddit? Why don't you intern there? Call for Reddit Data Science Interns just opened!

Reddit is recruiting Data Science interns for projects ranging from ads to safety to community-building. Interns are being...

reddit
To build on this work showing how deprecating engagement incentives for sensitive topics improves outcomes for users and society, I'm hopeful we can:

1) Audit algorithms across platforms for perverse engagement incentives (see this post for more on how: https://psychoftech.substack.com/p/defining-meaningful-algorithmic-transparency).

2) Align on a societal definition of sensitive content. This work relies on a common definition of what content we feel should or should not be optimized for engagement. Ideally, these lines are drawn by the world, not by private companies.

3) Decide on alternative incentives for important topics. Some kinds of engagement are likely more aligned with user value (e.g. explicit positive reactions from diverse audiences), and we should study those as potential alternatives. A rough sketch of what this could look like follows the linked post below.
Defining Meaningful Algorithmic Transparency Standards

Today's post is wonky, but algorithmic transparency laws have passed in the EU and are in progress in the US and UK, and we need to make laws meaningfully beneficial by being specific. And wonky.

The Psychology of Technology Institute Newsletter
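To make points 1 and 3 above concrete, here is a toy sketch of a ranking score that drops predicted comment/reshare terms for posts flagged as sensitive and instead credits explicit positive reactions from diverse audiences. All field names and weights are hypothetical illustrations, not any platform's actual formula.

```python
# Toy illustration: deprecate engagement incentives for sensitive content and
# substitute signals more plausibly aligned with user value. Hypothetical
# fields and weights only; no real platform's ranking is implied.
from dataclasses import dataclass

@dataclass
class Post:
    p_comment: float            # predicted probability the viewer comments
    p_reshare: float            # predicted probability the viewer reshares
    p_quality_signal: float     # predicted explicit "this was worth my time" response
    audience_diversity: float   # 0..1, how varied the positively reacting audiences are
    is_sensitive: bool = False  # flagged as civic/health or otherwise sensitive

def ranking_score(post: Post) -> float:
    if post.is_sensitive:
        # Remove the perverse engagement incentive; reward explicit positive
        # reactions, boosted when they come from diverse audiences.
        return post.p_quality_signal * (0.5 + 0.5 * post.audience_diversity)
    # Default: a conventional engagement-weighted blend.
    return 2.0 * post.p_comment + 3.0 * post.p_reshare + post.p_quality_signal

# The same predicted engagement ranks very differently once a post is treated
# as sensitive.
civic = Post(p_comment=0.4, p_reshare=0.2, p_quality_signal=0.1,
             audience_diversity=0.3, is_sensitive=True)
meme = Post(p_comment=0.4, p_reshare=0.2, p_quality_signal=0.1,
            audience_diversity=0.3)
print(ranking_score(civic), ranking_score(meme))
```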
Following up on @Jhorwitzz's excellent article about FB's efforts to change its relationship to political content, I wrote this piece on what we can learn from that work. https://psychoftech.substack.com/p/when-should-companies-optimize-for
When should companies optimize for engagement?

A recent Wall St. Journal article provides more evidence that removing ambiguous engagement signals like comments and shares from civic algorithms can improve outcomes for both users and society.

The Psychology of Technology Institute Newsletter
@ernie now hopefully folks will build on the content to keep making better and better products
Great answer from my former coworker Glenn Ellingson to "If you could wave a magic wand and fix one issue in the integrity work space, what would it be?" His answer: "convince the core product/growth team to measure themselves on value created (and lost) for the community of users, not short-term engagement. Even known-flawed measures like net promoter score (NPS) seem more likely to build lasting product value than counting likes & comments." Worth reading the full article @ https://www.everythinginmoderation.co/glenn-ellingson-meta-content-harms/ (a quick illustration of the NPS calculation follows the linked article).
Glenn Ellingson on mitigating bad behaviour and the limitations of enforce/allow

Covering: adding friction to increase user safety and measuring on and off-platform outcomes

Everything in Moderation
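For reference, NPS is just the share of promoters (ratings of 9-10 on a 0-10 "would you recommend this?" scale) minus the share of detractors (0-6). A tiny illustration with made-up ratings:

```python
# Net promoter score: % promoters (9-10) minus % detractors (0-6) on a 0-10
# "how likely are you to recommend this?" scale. Sample ratings are made up.
def net_promoter_score(ratings: list[int]) -> float:
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 3, 10, 9, 5, 8]))  # -> 10.0
```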