This is bad. Mastodon instances need tooling to automatically detect and ban abuse (or flag it for manual review), and to automatically defederate from instances that don't ban abuse: https://www.washingtonpost.com/politics/2023/07/24/twitter-rival-mastodon-rife-with-child-abuse-material-study-finds/
"Twitter rival Mastodon rife with child-abuse material, study finds" — The Washington Post. "The report raises safety questions about alternative social media sites."

@micahflee the report correctly noted that these problematic instances aren't part of the main Fediverse that we are on: "in the case of child safety, Japan has significantly more lax laws related to CSAM which has resulted in a cultural divide where most users in Japan are segregated from the rest of the Fediverse"

I'm sure there are some servers that haven't properly blocked these instances yet, which is a problem! But PhotoDNA is far from a magic bullet; cf. https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html

"A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal." — The New York Times. "Google has an automated tool to detect abusive images of children. But the system can get it wrong, and the consequences are serious."