Tarleton Gillespie

@tarleton
49 Followers
176 Following
46 Posts
I'm an independent-minded academic, critical of the tech industry, working for Microsoft. Perplexing. My latest book is Custodians of the Internet (Yale, 2018)
Don't look now!! It's the next wave of SMC interns at MSR, studying all things sociotechnical! https://socialmediacollective.org/2023/03/24/meet-the-2023-smc-sociotechnical-systems-phd-interns/
Our postdoc candidates were truly extraordinary. We are grateful to all who applied. I can only echo @ZoeGlatt and @chchliu that the market this year is awful, and if you haven't landed your dream spot, it is not your fault.
If this is something you'd like to read, please do. "The Fact of Content Moderation; Or, Let’s Not Solve the Platforms’ Problems for Them" Media and Communication, forthcoming. https://www.cogitatiopress.com/mediaandcommunication/article/view/6610
@natematias @rabble sorry to hear about that nonsense! If you didn't click that buy button yet, don't do it! Here's the free PDF: https://bit.ly/CustodiansOfTheInternet . My sense is that the small academic publishers simply can't afford to deal with global distribution in the face of the Amazonian giants.

We are hiring a predoc to work with @tarleton @maryLgray @zephoria and me in Cambridge MA, starting in July.

EDIT: SEARCH IS CLOSED

https://socialmediacollective.org/2023/02/23/stspredoc/

UPDATE: LAST CALL, Friends!! The SMC position for a full-time Pre-doctoral Research Assistant closes Monday, April 3, 5pm EDT!

@cyberlyra I guess I want a policy that (a) allows that the distinction btw hosting and boosting is actually hard to parse, (b) can distinguish between moderating imperfectly vs profoundly looking the other way while still benefitting [we could lean on the "good faith" part of 230 more], and (c) imagines some obligations for platforms that are more about aggregate harm+value, i.e. what do we do when the platform is doing what it should, and it still has deleterious effects?
@[email protected] Totally. I didn't notice whether filtering for terrorist content came up in the Gonzalez discussion -- my understanding was that the complaint could not hinge on removal, because the 230 case law is pretty settled, so they had to focus on recommendation, which to me is implicitly a case where filtering was not used or was not successful. But I didn't listen to every bit of the back and forth yesterday.
@[email protected] Certainly already exists, yes. I guess I'd want to distinguish btw detection algorithms and recommendation algorithms. The Gonzalez case is objecting to YouTube taking it upon itself to suggest videos, which may be ISIS videos, which may be harmful. Being more like a publisher than ever. So it's whether recommendation puts YouTube beyond what 230 indemnifies. Detection algos seem different, part of the mechanisms platforms can use as part of good-faith content moderation.
@danfaltesek I like how you are thinking of them as a gradient, and I agree that we can decide where to draw a line, past which there should be some responsibility for the provider when content is harmful. I guess I don't want it to be in the Court's read of 230, because I'd prefer a different conversation altogether, about aggregate harms to the public rather than individual harms to a user. Feels to me like that can't be framed as a 230 update.
Platform providers have asked us to accept a little error, as the cost of getting what we want, while they capitalize on our data and our attention to ads. This may not be a bargain we should have accepted, and it's one we can reject if we want. Or, we could use it to justify new obligations for these platforms: new expectations, public standards, and incentives for innovations in recommendation and moderation that improve the quality of public discourse. [22/22]