🧵 1/5 If you've seen a headline about Mastodon & child abuse material (yikes!), some comments:

1. Mastodon is just software, not a company or a place, so this is like saying 'lots of child abusers use MS Office'
2. These sites appear in many blocklists and the admins of many Mastodon servers block them, so that crap never travels to your neighbourhood.
3. This material is primarily hosted on servers in Japan.
4. Still bad; moderation and blocking are important areas to build up.
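For admins, point 2 is mostly a config task: import a shared blocklist and the offending servers are defederated before any of their content reaches your users. A minimal sketch of parsing such a list (the `#domain`/`#severity` column names are an assumption modelled on Mastodon's domain-block CSV export; in practice you would feed the result to the admin UI importer or the admin domain-blocks API rather than this hypothetical helper):

```python
import csv
import io

def parse_blocklist(csv_text):
    """Parse a Mastodon-style domain-block CSV into {domain: severity}.

    The column names (#domain, #severity) are assumptions based on
    Mastodon's domain-block export format; adjust for your list.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    blocks = {}
    for row in reader:
        domain = (row.get("#domain") or "").strip().lower()
        # Default to the strictest severity if none is given.
        severity = (row.get("#severity") or "suspend").strip().lower()
        if domain:
            blocks[domain] = severity
    return blocks

sample = "#domain,#severity\nbad.example,suspend\nspam.example,silence\n"
print(parse_blocklist(sample))
# {'bad.example': 'suspend', 'spam.example': 'silence'}
```

A shared list maintained by trusted admins is how "that crap never travels to your neighbourhood" scales across thousands of independent servers.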

Link to article: https://www.washingtonpost.com/politics/2023/07/24/twitter-rival-mastodon-rife-with-child-abuse-material-study-finds/

Twitter rival Mastodon rife with child-abuse material, study finds

The report raises safety questions about alternative social media sites.

The Washington Post
2/ ...and here's an update from @pixelfed, a fellow member of the Fediverse (Mastodon's wider family), about implementing PhotoDNA blocking of child abuse material soon: https://mastodon.social/@dansup/110770587919122415

3/ Good thread from one of the researchers with more details:

https://hachyderm.io/@det/110769470058276368

David Thiel (@[email protected])

In what is hopefully my last child safety report for a while: a report on how our previous reports on CSAM issues intersect with the Fediverse. https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media

Hachyderm.io

4/ Here’s the same researcher reporting that the “minimal” blocklist that is normally recommended would have blocked 87% of the problem right out of the box. A great starting point.

(Scroll up to see the conversation for more context)

https://hachyderm.io/@det/110770791310886114

David Thiel (@[email protected])

@[email protected] @[email protected] That list would have blocked 87% of hits in our dataset.

Hachyderm.io

5/ And here’s a noteworthy thread on why this could be getting such sudden attention. Everyone has different incentives.

https://social.cryptography.dog/@ansuz/110771887577594307

ansuz / ऐरन (@[email protected])

maybe I've just been fighting in the crypto-wars for too long, but it seems somewhat convenient that there's a paper about CSAM on the fediverse already getting significant press coverage just as a number of mass surveillance bills thinly veiled as protecting children are being drafted. Especially since 1. the Stanford uni report doesn't disclose where their funding comes from 2. all the available options for detecting CSAM seem to be US corporations, most of which are known to be extremely friendly with the CIA and NSA 3. it's really hard to deploy technologies like this in a privacy-preserving way #CSAM #ChatControl

social.cryptography.dog
@c_9 I mean, this was to be expected in federated social media. I never see anything like that on here though
@c_9 Funny how for somewhere that is rife with child abuse material, I’ve never seen anything.

@c_9

Many people are saying the owner of BirdX knows where a bunch of it is!

I mean, it ain't me!! Many, many people are saying it.

@c_9
Lazy click-bait #journalism. They tout it as a report, but give no information about methodology or details of the observations.
@Daily_Twerk Yup, but the actual report is linked in my third post in the thread, where one of the researchers describes it in detail: https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media
Addressing Child Exploitation on Federated Social Media

@c_9
Thanks for that.

That is a little different to the news article.
Certainly more thoughtful.

If someone creates an instance for such stuff but doesn't federate, there's little anyone could do.

In the 8 months I've been on the Fediverse I've never seen such content, and I have no desire to go looking for it or to play amateur detective.

It does raise an issue about how such content can be found and removed or blocked once federated.
Assuming an admin is not complicit, should the admin contact local law enforcement?

My gut feeling is yes.

It would be interesting to know if this is on the radar of admins. They may not say anything out loud in case it alerts these people to what is going on.

@c_9

“Investment in one or more centralized clearinghouses for performing content scanning (as well as investment in moderation tooling) would be beneficial to the Fediverse as a whole,” Thiel and co-author Renée DiResta wrote, referring to the so-called federated universe of platforms.

AFAICT, one of the authors of that study has also run their own disinfo campaigns:

https://en.wikipedia.org/wiki/Project_Birmingham_(disinformation_campaign)

https://en.wikipedia.org/wiki/Yonder_(company)

https://en.wikipedia.org/wiki/Ren%C3%A9e_DiResta


@c_9 Anyone wanna place bets on whether this story was planted by some scummy backdoors org?
@dalias @c_9 Have we reached Gandhi-con 3 already?
@c_9 Is blocking enough? There should be vigorous criminal investigation and prosecution, both on the supply and demand sides.
@acb In the context of a piece of software, the software should improve its blocking abilities. Mastodon can’t do the criminal-investigation part, but I agree, of course crimes should be investigated.
@c_9 Okay, now do Gab, Gettr and Truth Social...
@toxtethogrady Items 1-4 do not apply to any of those, so there's nothing to do, even if I did accept being handed homework. 🙂

@c_9 I looked at this study, and what struck me was that the majority of their examples followed a formula: a description, then links to secure-messenger accounts.

The uniformity of format makes me wonder how many "sellers" there are out there. Any number greater than zero is terrible, of course, but it's easy to (intentionally? This is Stanford, after all) make the results look scarier by not saying how many accounts they found, etc.

@c_9 @Setok Just another article to stop people trying out the platform.
@Albertkinng @c_9 Very similar to the types of shock-horror articles that went around during the dawn of the Internet. But yes, by the same argument you could write a piece with the title "Mastodon is full of right-wing Trumpers" (footnote: Truth Social is built on Mastodon). Of course you will find anything on an open and open-source platform. Whether that reflects the experience of the majority is another matter.
@c_9 Since I can't read Japanese, it's not a language my instance even lets onto my timeline. So I guess I skip all the creepy/illegal material out of Japan as an unintended consequence.
@c_9 I strongly suspect that, given that this is demonstrably untrue, there are some dark reasons behind this ridiculous article. #FollowTheMoney 🤔
@c_9 The other thing is -- even if there was some central authority in Mastodon which could prevent this use case, there would be other software that could be used. I mean, encrypted email lists would also permit users to do this. Do we ban email?
@c_9 @Nonya_Bidniss CSAM is rife within #Twitter, #Facebook, #Instagram & #YouTube as well. These reports are silly, as bad guys will use any software to break the law.

@c_9 Somehow federation is pointed out as the problem, yet people who want to do something obviously criminal probably don't want open federation anyway.

This is a tool for public microblogging, it's not suited for secret dealings.

I read the original source, and the problem is very real.

Federation is a problem because most instances will keep storing stuff from other instances, even when the originating instance has deleted it.

The Fediverse lacks a proper immune system, one able to detect and scrub this sort of material.
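The "immune system" mentioned above usually means matching incoming media against hash lists distributed by clearinghouses (per the report's recommendation quoted earlier). Real deployments like PhotoDNA use proprietary perceptual hashes that survive resizing and recompression; the toy sketch below shows only the simpler exact-hash variant, with made-up hash values, to illustrate the matching step:

```python
import hashlib

# Hypothetical hash list such as a clearinghouse might distribute.
# Real lists use perceptual hashes (e.g. PhotoDNA), not plain SHA-256,
# so that slightly altered copies still match.
KNOWN_BAD = {hashlib.sha256(b"known-bad-bytes").hexdigest()}

def should_reject(media_bytes: bytes) -> bool:
    """Return True if the media exactly matches the known-bad hash list."""
    return hashlib.sha256(media_bytes).hexdigest() in KNOWN_BAD

print(should_reject(b"known-bad-bytes"))   # True
print(should_reject(b"harmless-photo"))    # False
```

Running such a check at media-upload and federation-ingest time is the design the report argues for; the privacy trade-offs raised elsewhere in this thread come from who operates the hash list and what else gets scanned.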
