This is bad. Mastodon instances need tooling to automatically detect and ban abuse (or flag it for manual review), and to automatically defederate from instances that don't ban abuse (a rough sketch of what that could look like is below the link preview).

https://www.washingtonpost.com/politics/2023/07/24/twitter-rival-mastodon-rife-with-child-abuse-material-study-finds/
Twitter rival Mastodon rife with child-abuse material, study finds

The report raises safety questions about alternative social media sites.

The Washington Post
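
A minimal sketch of the two pieces that post asks for: flag a matched post for human review, and suspend federation with a non-cooperating instance. It assumes Mastodon's Reports API (POST /api/v1/reports) and the Admin domain-blocks endpoint (POST /api/v1/admin/domain_blocks, available since Mastodon 4.0); the instance URL, token, hash set, and all function names are illustrative, and the SHA-256 check is a naive stand-in for real hash-list services like PhotoDNA/NCMEC matching:

```python
# Sketch only: assumes an admin-scoped token and Mastodon >= 4.0.
import hashlib
import requests

INSTANCE = "https://example.social"   # assumption: your instance URL
TOKEN = "ADMIN_BEARER_TOKEN"          # assumption: token with admin + write scopes
KNOWN_BAD_HASHES: set[str] = set()    # placeholder for a vetted hash database

HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def media_is_known_csam(media_bytes: bytes) -> bool:
    # Naive exact-hash lookup; real systems use perceptual hashes,
    # since exact hashes are trivially evaded by re-encoding.
    return hashlib.sha256(media_bytes).hexdigest() in KNOWN_BAD_HASHES

def flag_status_for_review(account_id: str, status_id: str) -> None:
    # File a report so human moderators see the automated match.
    requests.post(
        f"{INSTANCE}/api/v1/reports",
        headers=HEADERS,
        json={
            "account_id": account_id,
            "status_ids": [status_id],
            "category": "violation",
            "comment": "Automated hash match; flagged for manual review",
        },
        timeout=30,
    ).raise_for_status()

def defederate(domain: str) -> None:
    # Suspend federation with a remote instance via the Admin API.
    requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers=HEADERS,
        json={
            "domain": domain,
            "severity": "suspend",
            "reject_media": True,
            "private_comment": "Auto-blocked: repeated unhandled abuse reports",
        },
        timeout=30,
    ).raise_for_status()
```

In a real deployment the detection side would hook into media ingestion and query vetted hash databases rather than a local set, and the defederation decision would be driven by a policy (e.g. unresolved forwarded reports over time) rather than a single match.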

@micahflee This is bad, yes, but I believe it's not something that can be solved at the Mastodon level.

It must be solved at the protocol level, in ActivityPub, since the same issue affects other fediverse services such as Pixelfed or PeerTube.

@micahflee Also, not all countries are putting the same effort into the same things.

Case in point: Japan and drawings/fictional characters (the loli problem).

Everyone knows about it, and as long as the laws and regulations don't change, this won't change either.

I'd bet this accounts for the main part of the CSAM issues on Mastodon instances...