There's A LOT of discussion about content moderation right now, and very little of it touches on the fact that we've all lived on the big social sites for the last decade-plus thanks to the massively exploited labor of mostly-invisible moderation workers. The social web at scale wouldn't have happened without these laborers, who, in addition to shit wages, have been exposed to literally every imaginable horror.

If we're remaking this world, let's do better on that front.

Anyway, as you hear about moderation fuckups and poor decisions by a handful of volunteer mods on servers that have grown by hundreds of thousands of users in a week, keep that in mind.

Finally, the very best coverage of this issue has been from @caseynewton at The Verge, whose haunting stories have stuck with me for years now.

They are worth reading, and worth really thinking about what *waves hands in all directions* means in terms of the actual human lives at the bottom of all this.

https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona

https://www.theverge.com/2019/6/19/18681845/facebook-moderator-interviews-video-trauma-ptsd-cognizant-tampa

[Link preview] The secret lives of Facebook moderators in America (The Verge): In a damning new report, Casey Newton gives an unprecedented look at the day-to-day lives of Facebook moderators in America. His interviews with twelve current and former employees of Cognizant in Arizona reveal a workplace perpetually teetering on the brink of chaos.
@dansinker @caseynewton The seminal work in this area is of course @ubiquity75's Behind the Screen.
@dansinker @caseynewton I also remember the first long piece I ever read about it, by Adrian Chen: https://www.wired.com/2014/10/content-moderation/
@dansinker @caseynewton god, the things I saw as a _product designer_ at a risk analysis company will haunt me forever. I can't even imagine what these folks are put through.

@dansinker @caseynewton That was disturbing... great articles.

There must be a better solution than more policing. But I guess that's age-old: how does a society enforce its moral norms?

This platform, besides being distributed, seeks by design to remove the profit motive. Does that help? It's not enough, certainly.

And is there a name for this social phenomenon of knowing what's behind the scenes? The old "you don't want to watch the sausage being made" situations?

Lots to think about.

@billrehm removing the profit motive certainly removes some exploitative incentives from the equation, but I think there's still a very real unanswered question about how moderation scales and how we don't grind the people doing it into dust along the way.
@dansinker @billrehm it's not always expedient or conclusive, but I continue to believe the moderation system developed and operated by Wikipedia's community of moderator-contributors stands alone for sustainability. I have proposed that these individuals should form a professional guild to license and accredit their services and processes to other social web platforms.
@pinsk @dansinker @billrehm it grinds people down…

@georgewherbert @pinsk @dansinker @billrehm

@molly0xfff, this feels like a "scaling volunteers" problem similar to Wikipedia, which has its faults but has been overall amazingly successful at creating and maintaining the largest store of human knowledge in history, all with volunteers.

Do you think that’s right, and do you think that a model like that is likely to emerge in the Fediverse?

@pinsk @dansinker @billrehm

Wikipedia as an organizational culture is pretty damn white, male, abled, and US-centric, though. For example, a stub I wrote on the European Nuclear Safety Regulators Group (ENSREG) was voted "not notable enough" and deleted in the 2010s. Yes, I am still salty; no, I haven't edited Wikipedia since; and ENSREG still doesn't have an English Wikipedia article.

Not notable???
https://www.ensreg.eu/members-glance

[Link preview] ENSREG at a glance (ensreg.eu): ENSREG is an independent expert advisory group, in which all EU Member States are represented by senior officials from their national regulatory authorities or nuclear safety authorities. Senior representatives of the European Commission are also part of the group…

@pinsk @dansinker @billrehm
Love this open-problem discussion re: how to scale moderation without grinding the people doing it into dust along the way.
I love Wikipedia & have done my bit fixing, updating, and creating entries over the years.
I like that Mastodon, like Wikipedia, has zero profit motive, so moderation by volunteers has to be manageable & not take too much of our time.
We are building this plane as we fly it. We'll see how it goes. Thx4 getting me to think about this open problem.
@dansinker @caseynewton this actually makes me wonder if I really should open up my own instance to others.
Besides suddenly having to moderate content on your server, owners and operators of instances probably also become liable for that content.
Maybe the smartest thing (albeit expensive on a per-user basis) would be to stay on your private instance or join a well-moderated one (with all the risk on the owner).
We’re probably about to become mini Twitters 🤔🤔🤔

@sven

How about an option in between? Allow up to 10–20 well-trusted people on your server, and then block any further registrations. It's small enough that DMs or email threads should be efficient for any needed moderation, especially once you get to know each other enough.

@dansinker @caseynewton
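
A minimal sketch of what that invite-only setup might look like on a standard self-hosted Mastodon server, using the tootctl admin CLI; the username and email below are placeholders, not anything from this thread:

  # Close public sign-ups so only accounts you create or invite can join
  # (run from the Mastodon directory as the mastodon user):
  RAILS_ENV=production bin/tootctl settings registrations close

  # Create an account directly for each of the 10-20 people you trust:
  RAILS_ENV=production bin/tootctl accounts create alice --email alice@example.com --confirmed

Invite links generated from the web UI would work too, if you'd rather let people pick their own usernames.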

@boud @dansinker @caseynewton This would probably be a good recipe to get an instance started, i.e., with a reasonably sized group of people whom you already know or who were invited by people who share similar values with you.
@dansinker @sven @boud @caseynewton yeah I would say if you're wondering whether you should open your instance up to a wider group of people, the answer is probably no
@sven I have for sure been thinking about making an instance of one since things have gotten a lot busier over here.
@sven @dansinker @caseynewton As for liability, do we not think Section 230 would apply? (I'm assuming you're in the U.S. If not, please disregard.)
[Quoted tweet] Arrgh (he/him/his) 🐀 on Twitter: "@jproulx The legal and technical barriers to T&S adequacy in a federated social universe are already insurmountable, now let's throw in the ethical one where you need an army of humans to manually review horrific shit that the algorithms missed. It's a PTSD toxic waste plant."
@caseynewton @dansinker I feel bad I never thought about this.
@dansinker I appreciate that Eugen himself addressed the two incidents publicly and they were resolved. More transparency and maturity by the 29-year-old than any of the previous social site owners I've known.
@molotovcockatiel I mean I don't even know which two incidences "the two incidents" would be in this context. There are a lot more than two incidents.
@dansinker There were two major incidents (yes, incidents) here to which I thought you were referring. Those were the reasons I saw for all the "discussion about content moderation" on here in the past few days. I guess you were referring to something else. Apologies.
@dansinker and I think overall a good thing? Moving moderation from business decisions made by under-appreciated labor to social decisions made by volunteers ("community leaders"?) hopefully moves us more toward the types of drama you get in forums/subreddits/community mailing lists.
@albertsun yeah I'm not sure how that will scale on instances with hundreds of thousands or millions of users
@dansinker yea true! it probably wouldn't - which maybe is a sign that instances shouldn't grow to that size? i signed up for mastodon.xyz and tbh know nothing about it and feel uneasy with that - contemplating rolling my own or rolling one with friends
@albertsun @dansinker You might want to reach out to your admin to find out more. Because many of these instances are small, they have the bandwidth to engage with individual users. Mine has been very approachable.

@dansinker I was thinking about this the other day. It seems like anything that's words (micro-to-whole-number aggressions) might be amenable to community policing and consensus rules, but the nightmare-fuel images and legally toxic stuff (e.g., CSAM) seem like things professionals must be paid to handle, and in some cases, to involve law enforcement over.

Technical categories, like porn, nudes, profanity, and some copyright, volunteers could handle too.

@dansinker
Some of the recent moderation kerfuffles might seem overdone, but as Deputy Fife said:

https://www.youtube.com/watch?v=de_P2aUZJyA

[Link preview] Barney Fife - Nip It (YouTube)
@dansinker @arthurwyatt THHHHIIIIISSSSS. Moderation is hard. Moderation is work. We really need to reflect on whose exploitation our comfort and safety has relied on. Building new systems will be painful, but hopefully worth it.

@dansinker in other words, we should all make some financial contribution to our federated infrastructure providers.

I'd be happy to pay $8/month as long as I don't get stigmatized by a blue checkmark or some stupid shit like that.

Having a healthy community without sponsored content is good enough for me.

@dansinker Indeed! @ubiquity75 wrote the book on this and is here. Looking forward to her continued insights.
@josh @dansinker Thanks, Josh. Yes, I’ve been studying content moderation _as work_, and the workers who do it, for almost 13 years. Here is my book on the subject. You may wish to check it out. https://yalebooks.yale.edu/9780300261479/behind-the-screen
[Link preview] Behind the Screen (Yale University Press): An eye-opening look at the invisible workers who protect us from seeing humanity's worst on today's commercial internet…
@ubiquity75 @josh @dansinker I just left Teleperformance, a third-party firm that does this kind of moderation work (the biggest call center company in the world), and while I did tech support, I used to live with colleagues who did TikTok and Pinterest moderation work.
@lapingvino @josh @dansinker It’s a pretty miserable job. I hope they’re okay.
@ubiquity75 @josh @dansinker It's in Europe and they only get a segment of everything, with psychological help available, etc., so from what I heard they are pretty okay. Worker rights are pretty essential for avoiding the biggest misery xD.
@lapingvino @josh @dansinker Indeed. Although not all European moderation jobs have those conditions.
@ubiquity75 @josh @dansinker I have had the opportunity to compare working conditions between TP and other similar companies around here, across other languages, etc., and there are definitely huge differences. Teleperformance is a company that has issues, but it is founded on ethical principles. I didn't see anything close to that with the other companies around here, and the other companies are cheaper for the clients, so...
@josh @dansinker @ubiquity75 Are there any good estimates for the maximum safe user-to-moderator ratio? I kinda suspect that an instance with 100,000 users can’t possibly employ enough moderators to do a good job. Maybe social media as currently implemented (flat networks with everyone broadcasting worldwide) is just inherently bad.
@mathew @josh @dansinker I’m leaning toward your last statement.
@ubiquity75 @josh @dansinker It’s an opinion I’ve been developing for quite a while now. There’s Dunbar’s number, and there’s the problem of context collapse. Combine those, and add in a structure that rewards pile-ons, and you have a factory for manufacturing abuse. And that’s before you even introduce algorithmic bias to the timeline.
@mathew @ubiquity75 @dansinker There was an executive once on the other site who said something along the lines of, "All the academics and opinion writers nattering on about content moderation don't understand the intractable complexities of billion-scale platforms." And my immediate thought was, "Maybe the problem isn't the academics, but billion-scale platforms, then?"
@josh @mathew @dansinker We understand the problems all too well. The problem is: their business model. Cry me a river.
@dansinker Moderation should be community-driven.
@div sure, but how does that work if a community is like ten million people?
@div @dansinker It's already bad enough that moderators of large social media platforms get exposed to traumatizing material. If social media stays at this kind of mass scale, where stuff like this is not only possible but likely to happen at least a handful of times, it exposes a whole community of people, who never went through any psychological training, to triggering content. That's especially awful for those with poor mental health in the first place.

If you don't already, you should follow @dansinker

He lives two weeks in the future, and following him has helped me understand things when I get there.

@dansinker And like a lot of low-paid, outsourced care-work labor, this is often gendered labor. It's a feminist issue as well as a tech issue.
@dansinker can you imagine the toxic stew they had to look at?
@dansinker so how do we do better? I mean, the not exploiting people part is obvious. But how do we have thoughtful and (at least somewhat) consistent moderation in a federated space?
I’m not sure it is possible to be fully consistent and timely, but I think we can aim for consistency as individuals and as a local community.

From July 2011, @anildash's wise counsel:

"You should make a budget that supports having a good community... Every single person who's going to object to these ideas is going to talk about how they can't afford to hire a community manager, or how it's so expensive to develop good tools for managing comments.

Okay, then save money by turning off your web server. Or enjoy your city where you presumably don't want to pay for police because they're so expensive."

https://anildash.com/2011/07/20/if_your_websites_full_of_assholes_its_your_fault-2/

[Link preview] If your website's full of assholes, it's your fault (Anil Dash)

@HiMYSYeD ok but not paying for cops is a pretty great idea it turns out.