Via @jdp23: the Senator behind the STOP CSAM bill, which would outlaw many forms of encryption without actually stopping CSAM, read that WaPo article about CSAM on the fediverse and tweeted about it:

https://twitter.com/SenatorDurbin/status/1683562063270928384

Senator Dick Durbin on Twitter

"“We got more photoDNA hits in a two-day period than we’ve probably had in the entire history of our organization of doing any kind of social media analysis, and it’s not even close.” We need the STOP CSAM Act. https://t.co/OQCokA71mF"


@thisismissem @jdp23 I wrote about this before. I can go further, if you like.

This "study" is absolute garbage.

For instance, it scans around half a million posts to find 100 "potential" hits, and does so on sites which don't use one particular tool.
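Taking the post's own numbers at face value (they aren't independently verified here), that works out to a hit rate on the order of 0.02%:

    # rough rate implied by the numbers cited above (not independently verified)
    hits = 100
    posts_scanned = 500_000
    print(f"{hits / posts_scanned:.4%}")  # -> 0.0200%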

He then acts as if this faux pas is the "end of the world", even though mainstream social media is known to be objectively worse than the fediverse in sheer number of cases.

He also uses Google's algorithms, which have been known to misclassify computer-generated images. While such images might not be to your liking, it is extremely misleading to suggest they are the same thing as actual abuse imagery.

It is also not unlikely that some of these posts are spammy or automated, hitting a large number of hashtags.

Also, he cherry-picks one *particular site* (which has recently been under heavy fire from fediverse admins) when other similar sites, even with similar policies, aren't seen to be troublesome in the same way.

Also, some cherry-picked posts shown in screenshots are labelled as having been posted almost a year ago, and statistics on this are ever so conveniently missing.

Also, if he wanted to help admins with a pertinent issue, he could have reached out to them privately, rather than cherry-picking posts here and there to try to humiliate them.

Also, this very same person has previously made tweets in opposition to Facebook deploying end-to-end encryption in FB Messenger.

He also seems to want Facebook to essentially run the fediverse in the name of "saving the children", or to run every image through a Microsoft hosted service (a PRISM / NSA partner).

Problematically, some of these services are not even based in the U.S.; even if they were, services have First and Fourth Amendment rights, and the argument here is about the quality of moderation and communication, not a lack of moderation.

It's not tenable to hold every service liable for a small amount of misuse, nor is it proportionate to do so, especially when someone's free expression is taken into consideration.

Also, a bad actor could just run their own dedicated service in violation of the law. If they're so determined to flout the law, they could well do so.

Also, these services are known to take actual child porn down, often within hours (he admitted as much); however, because it wasn't taken down "immediately", it becomes a "scandal".

@olives @jdp23 We are talking about the same thing right? This report? https://purl.stanford.edu/vb515nd6874

112 posts of CSAM, and 554 posts that are potentially CSAM or child sex-trafficking, is too much.

Even if 87% are from "alt fediverse" or "defediverse" instances, that still leaves roughly 15 posts of CSAM and 72 posts of potential CSAM/child sexual abuse on the main fediverse that have either gone unreported or been left unaddressed.
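As a rough check of that arithmetic (assuming the 87% figure applies equally to both counts, which is an assumption rather than something the report states):

    # share remaining on the "main" fediverse if 87% is on alt/defederated instances
    known_csam = 112
    potential_csam = 554
    remaining = 1 - 0.87
    print(round(known_csam * remaining))      # -> 15
    print(round(potential_csam * remaining))  # -> 72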

On the main fediverse, any number greater than 0 is unacceptable. We must do better.

Child Safety on Federated Social Media

The Fediverse, a decentralized social network with interconnected spaces that are each independently managed with unique rules and cultural norms, has seen a surge in popularity.

@olives @jdp23 using Microsoft PhotoDNA, Google's SafeSearch APIs, and Thorn's service for detecting CSAM is in fact an industry standard when it comes to trust and safety for user-generated content.

You might not like that they're US-based, or you might not know these tools, but we can surely work towards tools that work for the fediverse and within a privacy framework.

We don't yet have data on how quickly reports of CSAM or similar content are actioned on. Ideally we prevent publishing CSAM up front.
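For readers unfamiliar with these tools: the usual pattern is hash-and-match, where each uploaded image is reduced to a hash and compared against a list of hashes of known CSAM maintained by vetted organisations. A minimal sketch of that pattern, using an ordinary SHA-256 digest as a stand-in for a real perceptual hash (PhotoDNA itself is proprietary and only available as a hosted service, so none of its actual API is shown here):

    import hashlib

    # Hypothetical hash list; in practice the hashes of known CSAM come from a
    # vetted provider (e.g. NCMEC via PhotoDNA, or Thorn), never an ad-hoc local list.
    known_hashes: set[str] = set()

    def image_digest(image_bytes: bytes) -> str:
        # Stand-in for a perceptual hash. An exact digest only matches
        # byte-identical files; real systems use perceptual hashing so that
        # resized or re-encoded copies still match.
        return hashlib.sha256(image_bytes).hexdigest()

    def should_hold_for_review(image_bytes: bytes) -> bool:
        # Hold the upload (and file a report) instead of publishing it.
        return image_digest(image_bytes) in known_hashes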

@olives @jdp23 Also, at the end of the day, if you want to run a small instance, and you know your members are absolutely not going to post any content that's illegal (e.g., CSAM), then you don't have to use any of those tools to scan for potentially harmful content.

But other admins may go "yeah, I'd rather play it safe", and then employ tools to assist them in moderation.

@olives @jdp23 further, the report itself doesn't actually name and shame instances at all.

The information about "which" instances came later, because instance admins wanted to make sure they were blocking instances spreading CSAM, so as to avoid that material making its way onto their servers.

To me, several things are true simultaneously:

- the report called attention to a problem that Mastodon collectively hasn't paid enough attention to, and had some useful suggestions for improving moderation tools

- by eliding important details, including that the source of much of the CSAM has been known for this since 2017 and is widely defederated, and that reject_media was developed in 2017 specifically to deal with this problematic instance (and does so effectively for sites that turn it on; a sketch of applying it follows this list), the report painted an inaccurate picture of the situation.

- focusing only on the report's shortcomings shifts attention away from real problems, including that Mastodon installations by default don't block instances that are known sources of CSAM, that Mastodon gGmbH hasn't prioritized addressing this or improving moderation tools, and that the mobile apps and SpreadMastodon direct newcomers to a site where the moderators don't take action on clearly illegal content. Mastodon gGmbH has a track record of not prioritizing user safety, and it's a huge problem. Hopefully the reaction to this report leads to positive changes.

- then again, the report doesn't take a "positive deviance" approach of looking at what works (tier0 blocklists, existing mechanisms like silencing and reject_media) and the possibilities for making a decentralized approach work. Instead the report concludes that centralization will be required, and suggests collaboration with Threads and others "to help bring the trust and safety benefits currently enjoyed by centralized platforms to the wider Fediverse ecosystem." But wait a second: trust and safety SUCKS for most people on Threads, so why wouldn't these supposed "benefits" lead to the same situation in the fediverse?
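For concreteness, the reject_media behaviour and domain blocks mentioned above are exposed in Mastodon's admin interface and, on recent versions, via its Admin API. A rough sketch of creating a domain block that also rejects media, assuming an access token with admin scope and a recent Mastodon server (the instance, token, and domain below are hypothetical placeholders; verify the endpoint against the version your server runs):

    import requests

    INSTANCE = "https://mastodon.example"   # hypothetical instance
    TOKEN = "ADMIN_ACCESS_TOKEN"            # placeholder admin-scoped token

    # Create a domain block that also rejects media from the blocked domain.
    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={
            "domain": "known-bad.example",  # hypothetical domain
            "severity": "suspend",          # or "silence" for a softer block
            "reject_media": "true",
            "reject_reports": "true",
        },
    )
    resp.raise_for_status()
    print(resp.json())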


@jdp23 @olives I think I agree on that last point, in that centralisation is bad & not the correct approach; that's why we're trying to figure out how to make tech that previously existed only in centralised spaces available to the decentralized social web.

Early approaches will almost certainly be centralised to some degree, whilst we work with partners to evolve systems towards decentralisation.

@jdp23 @olives but I will say that the fediverse has a LOT to learn from centralised social media as to trust & safety (even if we do some things differently already)

As it is, we're seeing new fediverse software being launched in a mainstream way without attention paid to even the most basic of moderation tools, and that's a huge problem.

@jdp23 @olives So far the fediverse's way of tackling CSAM & other horrendous content has been defederation, rather than prevention; we've usually been moderating in a reactive manner, rather than proactively.