Sarah T. Roberts

@ubiquity75
25 Followers
386 Following
393 Posts

Professor, researcher, writer, teacher. I care about content moderation, tech, digital labor, the state of the world. I like animals and synthesizers and games. On the internet since 1993. Los Angeles/Tovangaar-based. Gay lady.

I wrote an entire book on content moderation called Behind the Screen. Now might be an interesting time to read it.

Interestingly, they don’t always have seniority but many are men. Like Elon’s goon squad, which is also all men — save for the nanny for little Aexebodyspray127625-2.

One former colleague who was “spared” told me they are one of their manager’s 75 direct reports. Lol. What.

Another thing I’ve been meaning to share: every single researcher was fired save one. There is no one left who does research of any kind to inform any actions the company takes. Worth noting that those spared are usually one of xxx people, and so they are headless: effectively fired but being kept on due to WARN Act matters.

They are called “agents” at Twitter. There are at least four third-party companies who sourced and managed them. As with most companies in the space, Twitter’s official FTE numbers of around 7,500 employees did NOT count the contractors. This is where ALL OF THESE FIRMS stash their moderators. Facebook easily has 20-25,000 of them at any given time. Twitter had at least 3,000.

Around 3,000+ contractor employees of Twitter were canned last night (totally normal thing to do, btw). How does Twitter have so many contractors? This is where the CONTENT MODERATOR numbers are hidden. From an ex-colleague I’ll not name: “of the 3,000+ contractors let go last night, I believe that it included a SIGNIFICANT portion of the content moderation workforce.”

I don’t know if I actually shared anything but the drawing, so here’s the interview from Harvard Business Review on content moderation and related things.

https://hbr.org/2022/11/content-moderation-is-terrible-by-design

Content Moderation Is Terrible by Design

Social media companies couldn’t exist in their current form without content moderation. But while these jobs are essential, they’re often low-paid, emotionally taxing, and extremely stressful — they require exposure to horrific violence, disturbing sexual content, and generally the worst of what we see (or don’t see) online. Do they have to be? Sarah T. Roberts, faculty director of the Center for Critical Internet Inquiry and associate professor of gender studies, information studies, and labor studies at UCLA, details the evolution of this work, from patchwork approaches to in-house moderators and contractors to the current prevailing model, where generalist contractors work in call center–like offices. There are steps companies could take to improve this work, including providing better technology for moderators as well as better pay and more psychological support. But improvement, at present, is more likely to come from worker organizing and collective demand for better conditions than from the firms that employ the workers or the companies that need the moderation.

Harvard Business Review

I have a bunch of buddies I want to drag onto Mastodon. How hard do I lean on ’em?

Heh. After all my complaining about CWs, I finally saw an excellent use for one.

I was reading a post by someone Black about something racist (it wasn’t behind a CW). But in the responses, one was behind the CW cut, labeled “White Person’s opinion.”

I scrolled straight past that shit and was ecstatic I didn’t have to engage it. 🤣

THAT’S a good use of the CW. 🤣🤣🤣🤣

As new members arrive -- particularly my colleagues in the academic community -- may I remind you that the fediverse does not look kindly upon data scraping without getting the consent of instance admins and members.

I wrote a critique of a recent (very, very sloppy) study that not only scraped without consent -- it also ran people's posts through Google:

https://fossacademic.tech/2022/10/18/notesOnNobreEtAl.html

As the AOIR ethics guides note: expectations of privacy are heavily contextual.

#aoir #researchEthics

More Mastodon Scraping Without Consent (Notes on Nobre et al 2022)

There’s a new paper out about Mastodon! But unfortunately, it’s a deeply problematic one. Nobre et al’s “More of the Same? A Study of Images Shared on Mastodon’s Federated Timeline” is now published in the proceedings of the International Conference on Social Informatics. (Unfortunately, it’s not open access.) Because I’m currently researching the fediverse and blogging about that process, I thought I’d write up notes on this paper. Why this paper? Frankly, because I’m pretty certain it violates the community norms, as well as the terms of service, of many Mastodon instances. It instantly reminded me of the controversial paper from Zignani et al, “Mastodon Content Warnings: Inappropriate Contents on a Microblogging Platform”, which resulted in a scathing open letter and the retraction of a dataset from the Harvard Dataverse.

Nobre et al’s “More of the Same” is a study of image-sharing. The authors claim that it is about image-sharing on Mastodon, but really their focus is on images they culled from Mastodon.social’s federated timeline. They pulled 4M posts from 103K active users, of which 1M had images. Since they pulled posts from Mastodon.social’s federated timeline, they saw posts from 4K separate instances. The authors state that a “relevant number” of the images they found are “explicit.” They categorize the images as such after running them through Google’s Vision AI Safe Search system. They also run the images they find through Google’s image search to trace where the images came from and how they are shared on Mastodon.

Ultimately, the authors don’t really make an argument, other than stating in passing that Mastodon needs better moderation, since people share explicit images. In some ways, “More of the Same” lives up to its title: it’s more of the same poor scholarship that can be seen in Zignani et al (in fact, Nobre et al cite that controversial paper). Here are my critiques:

FOSS Academic

". . . when characters are convincing, we might attend to them less, or attend less to what makes them a character. We might learn more from or about character when a character poses a problem. Put simply: when someone becomes a problem, we tend to question their character. We might be concerned more with what is behind an action, when this action is not one we are behind" (233).

Sara Ahmed, "Willful Parts: Problem Characters or the Problem of Character." *New Literary History*, Volume 42, Number 2, Spring 2011, pp. 231-253.

To my fellow Mastodon owners who need a giggle, enjoy this “I do not want a Mastodon” one page RPG. https://www.patreon.com/posts/i-do-not-want-74555460