In March, a right-wing pundit called for "transgenderism" to be "eradicated," claiming it was part of "the left's attempt to erase biological women from modern society."

His words did not fit the platform's definition of hate speech. But in my latest for New York Times Opinion, I argue his words were something more dangerous: fear-inducing speech that can stoke violence.

https://www.nytimes.com/2023/05/06/opinion/fear-speech-social-media.html?unlocked_article_code=KlL5WsqboqsdMgt1HYWCHHii6JQcYFoiggteB-Est-XpbW7XU5cTFpFFoUVHn-L6TBZvqdlXS5jGLXRtT4k8THU2z6FEyHQRdBJdCQXA6jWY1gnHgCJSbiH_lSuq2xe5ztvvDfJg8tqXNObYAXvZA7xTMMGimjCsREEv-SQiqwQOYPJh3V3IFRSv4h7mJyoYTTdt4-9frXLQQNJVbLl_KR9_gbXks3tqTTakIzwrH4MsZnlMZEh5N9-TFk3DNTu4qPjcvShfOYQjurj7XoSSLDQO6f30kc1NtlHEK1n98yhVfdxSWhCGayu2lseEICnN9Opbf_WQ1OxgaQAx3hn1jNY&smid=url-share

Opinion | Social Media Companies Need to Address Speech That Incites Fear

Fear-inciting speech, which isn’t banned by the big tech platforms, is far more dangerous than hate speech.


Hate speech gets all the attention. But fear is what leaders use to inspire violence. Fear that the election has been stolen. Fear that the trans movement is erasing women. Fear that children are being groomed by pedophiles. This is dangerous speech.

Susan Benesch of the Dangerous Speech Project says the key feature of it is that it persuades “people to perceive other members of a group as a terrible threat. That makes violence seem acceptable, necessary or even virtuous.”
https://dangerousspeech.org/


Fear speech is hard for automated systems to identify because it doesn't always rely on the slurs and derogatory words that characterize hate speech, Rutgers University professor Kiran Garimella found in his first large-scale quantitative study of fear speech. https://t.co/8fOXlmk8D5

In his second study of fear speech, Garimella found that it prompts more engagement on social media than hate speech, and users who post fear speech garner more followers. https://t.co/jRRmsuR5Yb

One innovative approach to quashing the popularity of fear speech comes in a new paper from former Facebook engineer Ravi Iyer, Jonathan Stray and Helena Puig Larrauri. They say platforms can reduce 'destructive conflict' by relying less on 'engagement' metrics that boost posts with high numbers of comments, shares or time spent.

Instead, they argue, platforms could boost posts that users explicitly indicate they found valuable. https://t.co/rB97KHcLUu
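As an illustration only (this is not the paper's actual method; the weights and field names below are invented), the difference between engagement-based ranking and ranking by explicit value signals can be sketched like this:

```python
# Illustrative sketch: rank posts by explicit "this was valuable" votes
# instead of raw engagement. All weights and field names are hypothetical.

def engagement_score(post):
    # Classic engagement ranking: comments, shares, and dwell time.
    return post["comments"] + 2 * post["shares"] + post["seconds_viewed"] / 60

def value_score(post):
    # Alternative: weight explicit user feedback ("valuable" votes)
    # far above passive engagement signals.
    return 10 * post["valuable_votes"] + 0.1 * post["shares"]

posts = [
    {"id": "outrage", "comments": 400, "shares": 300,
     "seconds_viewed": 90000, "valuable_votes": 5},
    {"id": "helpful", "comments": 40, "shares": 50,
     "seconds_viewed": 12000, "valuable_votes": 120},
]

by_engagement = sorted(posts, key=engagement_score, reverse=True)
by_value = sorted(posts, key=value_score, reverse=True)

print([p["id"] for p in by_engagement])  # ['outrage', 'helpful']
print([p["id"] for p in by_value])       # ['helpful', 'outrage']
```

The same two posts trade places depending on which signal the ranker optimizes: the high-comment, high-dwell-time post wins under engagement, while the post users explicitly marked as valuable wins under the alternative.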

Facebook has just announced a change in how it promotes political content. In a blog post, the company said it is "continuing to move away from ranking based on engagement" and is instead giving more weight to "learning what is informative, worth their time or meaningful" to users. https://t.co/DC6vomOoRK
Reducing Political Content in News Feed | Meta

We're working to understand people's preferences for political content and find a better balance of content in News Feed.


But in the end, the algorithms alone aren't going to save us. We, the users of the platforms, also have a role to play in challenging dangerous speech by calling out fear-based incitement through what is called "counterspeech." https://dangerousspeech.org/counterspeech/

Fighting fear is not going to be easy. But it is possibly the most important work we can do to prevent online outrage from begetting real-life violence.


@Julia

profit-making interests can stay the heck out of the business of deciding what is 'meaningful'

@Julia love these findings! Good to have qualitative support for what was already my intuition:

https://blog.erlend.sh/sense-making-in-federated-discourse

“We can do a lot better than 'posts per month' as our metric of success.

..optimize for an increasingly higher ratio of boosts/favorites per post; implies a culture of uplifting and listening, as opposed to incessant chatter.

Going beyond that, how about we look for ways to measure 'collabs', 'mutual connections' or 'ideas' per/m.

Quality over quantity, dear fedizens.”
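The ratio the quote proposes is easy to compute in principle. A toy sketch (the field names and sample numbers are hypothetical, purely to show the metric's shape):

```python
# Toy metric sketch: boosts + favorites received per post, as the quote
# suggests, instead of raw posts-per-month. Field names are hypothetical.

def uplift_ratio(account):
    """Explicit-approval signals received per post published."""
    posts = account["posts"]
    if posts == 0:
        return 0.0
    return (account["boosts_received"] + account["favorites_received"]) / posts

# A high-volume account vs. a low-volume one whose posts get uplifted more.
chatty = {"posts": 300, "boosts_received": 60, "favorites_received": 90}
thoughtful = {"posts": 20, "boosts_received": 80, "favorites_received": 120}

print(uplift_ratio(chatty))      # 0.5
print(uplift_ratio(thoughtful))  # 10.0
```

Under posts-per-month the chatty account looks more "successful"; under the uplift ratio, the account whose fewer posts earn more boosts and favorites comes out ahead.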


@erlend @Julia

My feeling is that we, the users, are the algorithm and we don't really need any algorithm to interpret it.

Our boosting of posts is our way of marking something as interesting or important. Isn't that, ideally, what an algorithm should do?

Favouriting shouldn't be used at all. ⭐ = OK.
OK, read it - OK, good point - OK, thanks.

We get followers based on what we post. We post interesting/important stuff, we get boosted. Those who see it follow to see more. Easy.

@Julia

We can absolutely not trust Facebook. They have proven that too many times and have used up their 2nd, 3rd and 100th chances.

I like the Fediverse. Where we, the people, boost what we think is good or important - or just too funny not to boost.

@Julia

Thank you!

Oxford English definition of "eradicate":
Destroy completely; put an end to.

So the video called for killing off all transgender(istic) "things": speech, literature, entertainment AND PEOPLE!

How can anyone not see that and do something about it? You documented very well how things often go based on such speech! I add Jan 6th to the list.

"Erase biological women from modern society"?
I would have laughed at the absurdity, if I didn't know people would fall for it. 😭