I'm seeing some folks dismissing this as a "hit piece" that misrepresents how #Mastodon federation affects the spread of #CSAM. But I don't agree: I think this is a serious problem for the #Fediverse, and the author outlines some solutions that are worth considering.

https://www.washingtonpost.com/politics/2023/07/24/twitter-rival-mastodon-rife-with-child-abuse-material-study-finds/

Twitter rival Mastodon rife with child-abuse material, study finds

The report raises safety questions about alternative social media sites.

The Washington Post

Yes yes, to say CSAM is "a problem on Mastodon" is like saying it's "a problem on email." That's sort of true.

Except that, because of the way federation works, it sounds like even posts that are flagged and removed may still be cached and propagated on other instances, at least for a while. And unlike email, this material can reach your instance simply because it federates with the bad actor's instance, without anyone intentionally directing CSAM from one account to another the way email requires.

The developers behind Mastodon and other Fediverse software really ought to prioritize solutions to this problem above and beyond "just defederate." Tools surely exist, or could be developed, to reduce the spread of CSAM here, and a task force for designing and implementing such tools across the Fediverse deserves serious consideration.
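To make that concrete, here is a rough sketch (my own illustration, not something from the report or from Mastodon's actual codebase) of the kind of check an instance could run on newly federated media: compare a perceptual hash of each attachment against a locally held list of known-bad hashes before the file is cached or served. In practice the hash list and the matching would come from a vetted programme such as PhotoDNA; the imagehash-based matching, the threshold, and the placeholder hash below are all assumptions for illustration.

from PIL import Image
import imagehash

MAX_DISTANCE = 4  # Hamming-distance threshold for a "match"; an assumed, tunable value

def is_known_bad(path, known_bad_hashes):
    """Return True if the image at `path` perceptually matches any known-bad hash."""
    h = imagehash.phash(Image.open(path))  # 64-bit perceptual hash of the attachment
    # Subtracting two ImageHash values gives their Hamming distance
    return any(h - bad <= MAX_DISTANCE for bad in known_bad_hashes)

# Illustrative usage: in reality the hash set would come from a vetted
# hash-sharing programme, not be assembled locally; this hex value is a
# meaningless placeholder.
known_bad_hashes = {imagehash.hex_to_hash("831f0d1e3c7e0f0f")}
if is_known_bad("incoming_media.jpg", known_bad_hashes):
    print("Quarantine the attachment, reject the post, and file a report.")

The point isn't this particular library; it's that a check like this could run at the moment an instance first fetches remote media, so flagged material never gets cached or re-served downstream.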
@whitey
Yes, the Stanford Internet Observatory authored both reports (on Mastodon and on Instagram). Here's the Instagram one: https://stacks.stanford.edu/file/druid:jd797tp7663/20230606-sio-sg-csam-report.pdf
@jbe I’m adding a direct link to the research paper, which contains those thoughtful proposals. https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf
@doncruse
See also this thread from one of the authors:
https://hachyderm.io/@det/110769470058276368
David Thiel (@[email protected])

In what is hopefully my last child safety report for a while: a report on how our previous reports on CSAM issues intersect with the Fediverse. https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media

Hachyderm.io
@jbe I have seen no child porn on my instance. I would guess the originating problem lies predominantly on a small percentage of servers. Surely the sources can be tracked?
@gdeihl
See the report for more. There are certainly regions where problematic servers concentrate (the report focuses heavily on Japan, for example), and of course small servers can be a problem. But the focus is really on how the architecture of federation makes it hard to prevent CSAM from spreading, and even to stop that spread after the originating post has been reported and removed.