Building on @Jhorwitzz's excellent article about FB's efforts to change its relationship with political content, I wrote this follow-up on what we can learn from that work. https://psychoftech.substack.com/p/when-should-companies-optimize-for
When should companies optimize for engagement?

A recent Wall St. Journal article provides more evidence that removing ambiguous engagement signals, like comments and shares, from the ranking of civic content can improve outcomes for both users and society.

To build on this work showing how deprecating engagement incentives for sensitive topics improves outcomes for users and society, I'm hopeful we can:

1) Audit algorithms across platforms for perverse engagement incentives (see this post for more on how: https://psychoftech.substack.com/p/defining-meaningful-algorithmic-transparency).

2) Align on a societal definition of sensitive content. This work relies on a common definition of which content should or should not be optimized for engagement. Ideally, those lines are drawn by the world, not by private companies.

3) Decide on alternative incentives for important topics. Some kinds of engagement are likely better aligned with user value (e.g., explicit positive reactions from diverse audiences), and we should study those as potential alternatives; a rough sketch follows below.
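To make 3) concrete, here is a minimal, hypothetical sketch of what a diversity-weighted positive-reaction score could look like. Everything here (the function name, the audience-cluster input, the harmonic discount) is my own illustrative assumption, not any platform's actual ranking code.

```python
from collections import defaultdict

def diversity_weighted_score(reactions):
    """Score a post by explicit positive reactions, discounting repeats
    from the same audience cluster.

    reactions: list of (audience_cluster, is_positive) tuples, where
    audience_cluster is any hashable label (a hypothetical input; a real
    system would derive clusters from user communities or demographics).
    """
    per_cluster = defaultdict(int)
    score = 0.0
    for cluster, is_positive in reactions:
        if not is_positive:
            continue  # count only explicit positive signals, not all engagement
        per_cluster[cluster] += 1
        # Harmonic discount: the 1st reaction from a cluster adds 1.0,
        # the 2nd adds 0.5, the 3rd 0.33, ... so broad approval across
        # clusters outscores concentrated approval within one cluster.
        score += 1.0 / per_cluster[cluster]
    return score

# Four positive reactions concentrated in one cluster vs. spread across four:
concentrated = [("A", True)] * 4
diverse = [("A", True), ("B", True), ("C", True), ("D", True)]
print(diversity_weighted_score(concentrated))  # ~2.08
print(diversity_weighted_score(diverse))       # 4.0
```

The point of the sketch is simply that such a signal counts only explicit positive feedback and rewards breadth of audience, in contrast to raw comment and share counts, which can rise just as easily on outrage as on value.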
Defining Meaningful Algorithmic Transparency Standards

Today's post is wonky, but algorithmic transparency laws have passed in the EU and are in progress in the US and UK, and we can only make these laws meaningfully beneficial by being specific. And wonky.
