Tarleton Gillespie

@tarleton
49 Followers
176 Following
46 Posts
I'm an independent-minded academic, critical of the tech industry, working for Microsoft. Perplexing. My latest book is Custodians of the Internet (Yale, 2018)
Don't look now!! It's the next wave of SMC interns at MSR, studying all things sociotechnical! https://socialmediacollective.org/2023/03/24/meet-the-2023-smc-sociotechnical-systems-phd-interns/
Our postdoc candidates were truly extraordinary. We are grateful to all who applied. I can only echo @ZoeGlatt and @chchliu that the market this year is awful and if you haven’t landed your dream spot, it is not your fault.
If this is something you'd like to read, please do. "The Fact of Content Moderation; Or, Let’s Not Solve the Platforms’ Problems for Them" Media and Communication, forthcoming. https://www.cogitatiopress.com/mediaandcommunication/article/view/6610
We are hiring a predoc to work with @tarleton @maryLgray @zephoria and me in Cambridge MA, starting in July.

EDIT: SEARCH IS CLOSED

https://socialmediacollective.org/2023/02/23/stspredoc/

UPDATE: LAST CALL, Friends!! The SMC position for a full-time Pre-doctoral Research Assistant closes Monday, April 3, 5pm EDT!

Platform providers have asked us to accept a little error as the cost of getting what we want, while they capitalize on our data and our attention to ads. This may not be a bargain we should have accepted, and it's one we can reject if we want. Or, we could use it to justify new obligations for these platforms: new expectations, public standards, and incentives for innovations in recommendation and moderation that improve the quality of public discourse. [22/22]
If content moderation is imperfect, then what gets recommended will also occasionally include the reprehensible, the harmful, or the illegal. Even if standards were applied consistently, we do not agree on what they should be; and people are ingenious when it comes to testing and eluding these governance mechanisms. [21/22]
The part that’s new, perhaps, is that we also have to figure out our societal tolerance for error. Content moderation, even when performed in good faith, can never be perfectly executed. At this scale, even sophisticated detection software removes some content it shouldn’t, and overlooks some that it should remove; we ask too many people to do too difficult a job with too little support, and as such the standards will invariably be applied inconsistently. [20/22]
This is something we never solved with traditional media either, but our efforts involved setting specific obligations about education, about children, about balance, about incentives towards quality programming, etc. This may sound antiquated, but it is a problem we have always faced, and one we face again with social media. [19/22]
Neither of these outcomes is particularly helpful if what we're actually trying to address is the aggregate harms of information that we're not willing to simply prohibit. Instead of hoping to do so by extending or curtailing 230, we need to look back to a well-worn, century-long discussion: how do we get a media ecosystem, largely or entirely driven by market imperatives, to also serve the public interest? [18/22]
If Congress makes it so that Section 230 no longer protects recommendation, platforms are very likely to remove far more content altogether, as well as more drastically reduce what they're willing to recommend - which they do already. Not recommending content that is otherwise there to be seen is exactly what conservatives rail against, mistakenly calling it "shadowbanning" - but it is the only logical response from platforms if the Court finds for the plaintiffs in this case. [17/22]