In what is hopefully my last child safety report for a while: a report on how our previous reports on CSAM issues intersect with the Fediverse.

https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media

Addressing Child Exploitation on Federated Social Media

Similar to how we analyzed Twitter in our self-generated CSAM report, we did a brief analysis of public timelines of prominent servers, processing media with PhotoDNA and SafeSearch. The results were legitimately jaw-dropping: our first pDNA alerts started rolling in within minutes. The true scale of the problem is much larger, as inferred by cross-referencing CSAM-related hashtags with SafeSearch level 5 nudity matches.
Hits were primarily on a not-to-be-named Japanese instance, but a secondary test to see how far they propagated did show them getting federated to other servers. A number of matches were also detected in posts originating from the big mainstream servers. Some of the posts that triggered matches were removed eventually, but the origin servers did not seem to consistently send "delete" events when that happened, which I hope doesn't mean the other servers just continued to store it.
The Japanese server problem is often assumed to be limited to "lolicon" or CG-CSAM, but it appears that servers that allow computer-generated imagery of kids also attract users posting and trading "IRL" materials (their words, clear from post and match metadata), as well as grooming and swapping of CSAM chat group identifiers. This is not altogether surprising, but it is another knock against the excuses of lolicon apologists.
Traditionally the solution here has been to defederate from freezepeach servers and...well, all of Japan. This is commonly framed as a feature and not a bug, but it's a blunt instrument and it allows the damage to continue. With the right tooling, it might be possible to get the large Japanese servers to at least crack down on material that's illegal there (which non-generated/illustrated CSAM is).
I have argued for a while that the Fediverse is way behind in this area; part of this is a lack of tooling and a reliance on user reports, but part is architectural. CSAM-scanning systems work in one of two ways: hosted services like PhotoDNA, or privately distributed hash databases. The former is a problem because all servers hitting PhotoDNA at once for the same images doesn't scale. The latter is a problem because widely distributing hash databases allows for crafting evasions or collisions.
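To make the distributed-hash-database model concrete, here is a minimal sketch. Everything in it is hypothetical: real systems use perceptual hashes such as PhotoDNA, which are proprietary and resilient to re-encoding, whereas the stand-in SHA-256 used here only catches exact byte-for-byte copies. The basic lookup shape is the same either way, and it shows why handing the full database to every server is risky: whoever holds it can probe for evasions.

```python
import hashlib

# Hypothetical, simplified stand-in for a hash-database check.
# Real CSAM scanners use perceptual hashes (e.g. PhotoDNA); SHA-256 is
# used here only so the example is self-contained and runnable.
known_hashes = {
    # Entries would come from a vetted database (NCMEC etc.).
    # This one is just sha256(b"test"), for demonstration.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_database(media_bytes: bytes) -> bool:
    """Return True if the media's hash appears in the known-bad set."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return digest in known_hashes

print(matches_database(b"test"))        # True (hash is in the set above)
print(matches_database(b"other data"))  # False
```

A hosted service like PhotoDNA keeps `known_hashes` server-side, which protects the database but creates the scaling problem described above when many federated servers scan the same viral image independently.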
I think for this particular issue to be resolved, a couple of things need to happen. First, an ActivityPub implementation of content-scanning attestation should be developed, allowing origin servers to perform scanning via a remote service and other servers to verify that it happened. Second, for the hash databases that are privately distributed (e.g. Take It Down, NCMEC's NCII database), someone should probably take on making these into a hosted service.
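A rough sketch of what such a scanning attestation could look like. Every detail here is invented for illustration: the field names, the `ContentScanAttestation` type, the scanner URL, and the shared-secret HMAC standing in for a real signature scheme. An actual ActivityPub extension would presumably build on the HTTP signatures and JSON-LD vocabulary the protocol already uses, so that any server could verify the attestation without a shared secret.

```python
import hashlib
import hmac

# Hypothetical sketch: the origin server's scanning service signs a
# statement that a given media hash was checked, and receiving servers
# verify that signature before trusting the "scanned" claim.
# An HMAC with a shared secret stands in for a real public-key signature.
SCANNER_KEY = b"shared-secret-with-scanning-service"  # stand-in only

def attest(media_bytes: bytes) -> dict:
    """Produce a (hypothetical) attestation that this media was scanned."""
    media_hash = hashlib.sha256(media_bytes).hexdigest()
    sig = hmac.new(SCANNER_KEY, media_hash.encode(), hashlib.sha256).hexdigest()
    return {
        "type": "ContentScanAttestation",          # invented extension type
        "mediaHash": media_hash,
        "scanner": "https://scanner.example/v1",   # hypothetical service
        "result": "clear",
        "signature": sig,
    }

def verify(attestation: dict, media_bytes: bytes) -> bool:
    """Check that the attestation covers these exact bytes and is signed."""
    media_hash = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(SCANNER_KEY, media_hash.encode(), hashlib.sha256).hexdigest()
    return (attestation["mediaHash"] == media_hash
            and hmac.compare_digest(attestation["signature"], expected))

att = attest(b"some media bytes")
print(verify(att, b"some media bytes"))  # True
print(verify(att, b"tampered bytes"))    # False
```

The point of the design is that downstream servers never need the hash database itself; they only need to check that a trusted scanner vouched for the specific bytes they received.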
By the way, now that we have big players like Meta entering the Fediverse, it would be great if they could sponsor some development on child safety tooling for Mastodon and other large ActivityPub implementations, as well as work with an outside organization to make a hosted hash database clearinghouse for the Fediverse. It would be quite cheap for them, and would make the ecosystem as a whole a lot nicer. /thread

@det
I think that, along with helping to fund the servers hosting our accounts, we may need an organization or two to step forward as safety "police" for the #Fediverse, funded through donations.

I hope this issue is addressed soon.

IFTAS

IFTAS is a non-profit organization created to help like-minded Fediverse members foster and preserve inclusive, civil discourse for the common good.

Mastodon hosted on mastodon.iftas.org
@det no. That's definitely a camel's nose

I would never trust Meta to create or maintain the tooling for something as important and necessary as policing CSAM. It's an appalling shame that the ActivityPub specification did not account for moderation tools or CSAM blockers, but Meta would never give you those tools for free. They would rather use it as leverage to bend the entire ActivityPub spec to their whim, playing out the "Extend" phase of Embrace, Extend, Extinguish.

It's a Faustian bargain. Those tools need to be developed, absolutely, but by the open source community, not a profit-driven, amoral company.

@det I don't think Meta can be trusted to do anything positive...