New preprint of my final PhD project ✨ Effects of Preemptive Empathy Interventions on Reply Toxicity among Highly Active Social Media Users https://osf.io/preprints/socarxiv/evdxy/
In a preregistered survey experiment, I test various interventions to reduce reply toxicity in online political discussions by inducing empathy and perspective-taking. I test three different changes to the user interface (nudges) and one more educative intervention (a boost).
I recruited a sample of 2,154 highly active social media users via Facebook Ads in the United States and Germany. After measuring participants’ political attitudes on climate change, migration, and feminist issues (abortion in the US, gender-neutral language in Germany), I confronted them with social media statements they were likely to disagree with and asked them to write a comment in reply.

Before commenting, the treatment groups were shown the empathy and perspective-taking interventions. There were two control groups, including one that received a simple friction placebo.

The boost aimed to target participants’ motivation to engage in empathy and perspective-taking, so as to better manage conflicting political views and keep the discussion constructive (non-toxic).

However, participants were also asked to first assess whether they perceived the trigger statement as generally legitimate and, if not, to apply the 'do-not-feed-the-troll' heuristic @stworg @samwineburg
Overall, even within this sample of frequent commenters, reply toxicity was highest among people with especially high online activity - more evidence of selection into toxic behaviour on social media. @M_B_Petersen @andyguess @brendannyhan
Compared to the control group, neither the empathy nor the perspective-taking nudges, nor the friction placebo, reduced reply toxicity. Boosting decreased reply toxicity to some degree, but the effect was not robust to the inclusion of covariates. More details in the preprint!
also interesting: toxicity differences in replies to different topics
also interesting 2: are there language differences in Google’s Perspective API’s toxicity scores? + does DeepL translation absorb toxicity?
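Aside, since measurement matters for that point: Perspective’s toxicity models are language-specific, which is one plausible source of such differences. Below is a minimal sketch of how a reply might be scored - the endpoint and payload shape follow the public Perspective API docs, while the helper name and example texts are my own illustrations, not code from the preprint.

```python
# Sketch: building a request body for Google's Perspective API
# (payload shape per the public API documentation; helper name
# and example texts are illustrative, not from the preprint).

def build_analyze_request(text: str, language: str) -> dict:
    """Build the JSON body for Perspective's comments:analyze endpoint."""
    return {
        "comment": {"text": text},
        "languages": [language],  # scoring models differ by language
        "requestedAttributes": {"TOXICITY": {}},
    }

# To compare cross-language scoring (e.g. an original German reply vs.
# its DeepL translation), one would POST each body to
# https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=API_KEY
# and compare attributeScores["TOXICITY"]["summaryScore"]["value"]
# in the responses.
payload_de = build_analyze_request("Beispielkommentar", "de")
payload_en = build_analyze_request("Example comment", "en")
```

If a translated reply scores consistently lower than the original, the gap could reflect either the translation softening the text or the two language models calibrating toxicity differently - the two are hard to disentangle.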
interesting / random 3: favourite and most offensive emojis selected by participants in a distraction task. - Is this the first “useful” application of word clouds? 🤔

Back to the bottom line - in this sample, reducing online toxicity does not seem to work via simple changes to the user interface (nudging). Boosting, however, appears more promising for preemptively reducing toxicity before important voices are forced out of online public discourse!

Thanks to @simonmunzert @lorenz_spreen @seramirezruiz and many more for tons of helpful suggestions! Further feedback very welcome!