🧵 1/5 If you've seen a headline about Mastodon & child abuse material (yikes!), some comments:

1. Mastodon is just software, not a company or a place, so this is like saying 'lots of child abusers use MS Office'.
2. These sites appear in many blocklists and the admins of many Mastodon servers block them, so that crap never travels to your neighbourhood.
3. The material is primarily hosted on servers in Japan.
4. It's still bad; moderation and blocking are important areas to build up.

Link to article: https://www.washingtonpost.com/politics/2023/07/24/twitter-rival-mastodon-rife-with-child-abuse-material-study-finds/

Twitter rival Mastodon rife with child-abuse material, study finds

The report raises safety questions about alternative social media sites.

The Washington Post

2/ ...and here's an update from @pixelfed, a fellow member of the Fediverse (the wider network Mastodon belongs to), about implementing PhotoDNA blocking of child abuse material soon: https://mastodon.social/@dansup/110770587919122415

3/ Good thread from one of the researchers with more details:

https://hachyderm.io/@det/110769470058276368

David Thiel (@[email protected])

In what is hopefully my last child safety report for a while: a report on how our previous reports on CSAM issues intersect with the Fediverse. https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media

Hachyderm.io

4/ Here’s the same researcher reporting that the “minimal” block list that is normally recommended would have blocked 87% of the problem right out of the box. A great starting point.

(Scroll up to see the conversation for more context)

https://hachyderm.io/@det/110770791310886114

David Thiel (@[email protected])

@[email protected] @[email protected] That list would have blocked 87% of hits in our dataset.

Hachyderm.io

5/ And here’s a noteworthy thread on why this could be getting such sudden attention. Everyone has different incentives.

https://social.cryptography.dog/@ansuz/110771887577594307

ansuz / ऐरन (@[email protected])

maybe I've just been fighting in the crypto-wars for too long, but it seems somewhat convenient that there's a paper about CSAM on the fediverse already getting significant press coverage just as a number of mass surveillance bills thinly veiled as protecting children are being drafted. Especially since 1. the Stanford uni report doesn't disclose where their funding comes from 2. all the available options for detecting CSAM seem to be US corporations, most of which are known to be extremely friendly with the CIA and NSA 3. it's really hard to deploy technologies like this in a privacy-preserving way #CSAM #ChatControl

social.cryptography.dog