In March, a right-wing pundit called for "transgenderism" to be "eradicated," claiming it was part of "the left's attempt to erase biological women from modern society."

His words did not fit the platform's definition of hate speech. But in my latest for New York Times Opinion, I argue they were something more dangerous: fear-inducing speech that can stoke violence.

https://www.nytimes.com/2023/05/06/opinion/fear-speech-social-media.html?unlocked_article_code=KlL5WsqboqsdMgt1HYWCHHii6JQcYFoiggteB-Est-XpbW7XU5cTFpFFoUVHn-L6TBZvqdlXS5jGLXRtT4k8THU2z6FEyHQRdBJdCQXA6jWY1gnHgCJSbiH_lSuq2xe5ztvvDfJg8tqXNObYAXvZA7xTMMGimjCsREEv-SQiqwQOYPJh3V3IFRSv4h7mJyoYTTdt4-9frXLQQNJVbLl_KR9_gbXks3tqTTakIzwrH4MsZnlMZEh5N9-TFk3DNTu4qPjcvShfOYQjurj7XoSSLDQO6f30kc1NtlHEK1n98yhVfdxSWhCGayu2lseEICnN9Opbf_WQ1OxgaQAx3hn1jNY&smid=url-share

Opinion | Social Media Companies Need to Address Speech That Incites Fear

Fear-inciting speech, which isn’t banned by the big tech platforms, is far more dangerous than hate speech.

The New York Times

Hate speech gets all the attention. But fear is what leaders use to inspire violence. Fear that the election has been stolen. Fear that the trans movement is erasing women. Fear that children are being groomed by pedophiles. This is dangerous speech.

Susan Benesch of the Dangerous Speech Project says the key feature of it is that it persuades “people to perceive other members of a group as a terrible threat. That makes violence seem acceptable, necessary or even virtuous.”
https://dangerousspeech.org/

Dangerous Speech Project

Fear speech is hard for automated systems to identify because it doesn't always rely on the slurs and derogatory words that characterize hate speech, Rutgers University professor Kiran Garimella found in his first large-scale quantitative study of fear speech. https://t.co/8fOXlmk8D5

In his second study of fear speech, Garimella found that it prompts more engagement on social media than hate speech, and users who post fear speech garner more followers. https://t.co/jRRmsuR5Yb

One innovative approach to quashing the popularity of fear speech comes in a new paper from former Facebook engineer Ravi Iyer, Jonathan Stray and Helena Puig Larrauri. They say platforms can reduce 'destructive conflict' by relying less on 'engagement' metrics that boost posts with high numbers of comments, shares or time spent.

Instead, they argue, platforms could boost posts that users explicitly indicate they found valuable. https://t.co/rB97KHcLUu
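The contrast between the two ranking approaches can be sketched in a few lines. This is a toy illustration, not anyone's actual ranking system: the field names ("comments", "marked_valuable", "impressions") and the scoring formulas are my own assumptions, meant only to show how the same two posts can rank in opposite orders under engagement-based versus explicit-value-based scoring.

```python
# Toy sketch: engagement-based vs. explicit-value-based ranking.
# All field names and formulas are illustrative assumptions.

def engagement_score(post):
    # Classic engagement ranking: rewards sheer volume of interaction,
    # regardless of whether users actually found the post worthwhile.
    return post["comments"] + post["shares"] + post["seconds_viewed"] / 60

def value_score(post):
    # Alternative: counts only explicit "this was valuable" signals,
    # normalized by how many people saw the post.
    return post["marked_valuable"] / max(post["impressions"], 1)

posts = [
    {"id": "outrage", "comments": 900, "shares": 400,
     "seconds_viewed": 50000, "marked_valuable": 30, "impressions": 20000},
    {"id": "useful", "comments": 40, "shares": 60,
     "seconds_viewed": 8000, "marked_valuable": 350, "impressions": 5000},
]

by_engagement = sorted(posts, key=engagement_score, reverse=True)
by_value = sorted(posts, key=value_score, reverse=True)
print([p["id"] for p in by_engagement])  # ['outrage', 'useful']
print([p["id"] for p in by_value])       # ['useful', 'outrage']
```

Under the engagement metric the high-conflict post wins; under the explicit-value metric the post users deliberately marked as valuable wins, which is the reordering the paper's authors are after.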

@Julia love these findings! Good to have qualitative support for what was already my intuition:

https://blog.erlend.sh/sense-making-in-federated-discourse

“We can do a lot better than 'posts per month' as our metric of success.

..optimize for an increasingly higher ratio of boosts/favorites per post; implies a culture of uplifting and listening, as opposed to incessant chatter.

Going beyond that, how about we look for ways to measure 'collabs', 'mutual connections' or 'ideas' per/m.

Quality over quantity, dear fedizens.”
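The "boosts/favorites per post" ratio proposed in the quote is simple arithmetic, and a quick sketch shows why it measures listening rather than chatter. The function name and the sample numbers here are my own illustrative assumptions:

```python
# Illustrative sketch of the quoted "boosts/favorites per post" ratio.
# Names and numbers are assumptions, not a real platform metric.

def uplift_ratio(boosts, favourites, posts):
    # High ratio: members amplify each other more than they post
    # (a culture of uplifting and listening).
    # Low ratio: lots of posting, little amplification (incessant chatter).
    return (boosts + favourites) / max(posts, 1)

print(uplift_ratio(boosts=120, favourites=80, posts=400))   # 0.5
print(uplift_ratio(boosts=300, favourites=500, posts=400))  # 2.0
```

The same posting volume yields very different ratios depending on how much the community amplifies, which is the "quality over quantity" point the quote is making.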

Sense-making in federated discourse

In Feed Overload I made a brief case for some content gardening tools I'm missing in my fediverse experience. Evergreen content garden...

Open Indie

@erlend @Julia

My feeling is that we, the users, are the algorithm and we don't really need any algorithm to interpret it.

Our boosting of posts is our way of marking something as interesting or important. Isn't that, ideally, what an algorithm should do?

Favouriting shouldn't be used at all. ⭐ = OK.
OK, read it - OK, good point - OK, thanks.

We get followers based on what we post. We post interesting/important stuff, we get boosted, and those who see it follow to see more. Easy.