Grateful to everyone who's engaging thoughtfully and proactively with the Stanford team's important report on how to manage CSAM across the fediverse.

And to folks who are responding with a defensive reflex: I get it, having seen Big Tech respond the same way to research & news coverage over the years.

I think a mature fediverse is possible—one that takes problems and unfavorable media seriously, channeling frustrations toward addressing problems rather than lashing out at inconvenient news.

As the fediverse grows in influence you can expect growth in research & news coverage. That's a good sign.

Many investigations will be high-quality, good-faith work. Some will be blatantly false. And a lot of it will feel personal & be hard to distinguish for people who care deeply.

We've seen the same thing happen to Wikipedia. Some in the community will complain, some will lash out, saying reporters don't understand. Others will roll up their sleeves. My tip: look for the helpers.

Thiel & DiResta have posted some really great ideas for possible ways to manage CSAM. I especially love how they think beyond legal compliance to also consider moderator mental health, including:

- subscribable blocklists
- hashing
- changes to the moderation interface
- an attestation chain feature for ActivityPub that would enable instances to confirm that images were scanned/moderated elsewhere

https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf
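To make the hashing idea concrete: the core of hash-based moderation is matching incoming media against a subscribed list of known-bad hashes, so moderators never have to view the material directly. Here's a minimal sketch under stated assumptions — real deployments use perceptual hashes (e.g. PhotoDNA or PDQ) rather than plain SHA-256, and the hash list, function names, and sample bytes below are all hypothetical illustrations, not anything from the report:

```python
import hashlib

# Hypothetical subscribed hash list: hex digests of known-bad media,
# as an instance might receive from a shared industry list.
# (Illustrative only; real systems use perceptual hashes, which
# tolerate re-encoding, not exact cryptographic hashes.)
subscribed_hashes = {
    hashlib.sha256(b"known-bad-example").hexdigest(),
}

def should_quarantine(media_bytes: bytes) -> bool:
    """Return True if the media's hash appears on a subscribed list,
    letting the instance quarantine it before any human sees it."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return digest in subscribed_hashes

print(should_quarantine(b"known-bad-example"))   # True
print(should_quarantine(b"benign-image-bytes"))  # False
```

The attestation-chain idea extends this: an instance could attach a signed statement that a given hash was already scanned upstream, so downstream instances can skip re-scanning.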

The Stanford team's study also offers one principled approach (of many possible) to studying high-risk topics, producing research that benefits the fediverse as a whole while minimizing risks to individual communities.

- they didn't archive anything
- they used special software to minimize leakage of sensitive material
- they didn't query user profiles
- they worked so carefully to avoid false positives that, statistically speaking, they probably undercounted
- they followed the law around missing & exploited children

@natematias super interesting, we did some work with @dajb last year on tangential topics ( https://bonfirenetworks.org/zappa/ ) which paved the way to our current boundaries implementation ( https://bonfirenetworks.org/posts/how_to_boundaries/ )

@natematias I just hope @Gargron and the rest of the mastodon developers take heed
@natematias Bucketing people into one of several types of reaction comes across as odd and a bit invalidating. Perhaps we're seeing two different sets of people reacting? Folks simultaneously venting frustrations and participating in efforts to make this a better place seem fairly common around here, and that's a pretty normal, and not inherently contradictory, human response to the situation.
@natematias Have you seen a breakdown by instance?

@MeanestBossEver there's a good discussion thread at https://hachyderm.io/@det/110770791310886114 ... 87% are on tier0 blocklists (apparently mostly one large Japanese instance). Not sure it's in the thread but apparently most of the rest was on the flagship instance.

Great thread @natematias ... one additional bit of context for the WaPo article is that there's an attempt to sneak the STOP CSAM bill (which wouldn't actually stop CSAM) through this week -- @eff has more at https://www.eff.org/deeplinks/2023/07/ndaa-no-place-sweeping-internet-legislation-stop-csam-act

David Thiel (@[email protected])

@[email protected] @[email protected] That list would have blocked 87% of hits in our dataset.


@jdp23 thanks for sharing this extra context!

A law of tech policy: at any given time, there's at least one looming ineffective, invasive digital child-safety bill. If researchers waited for quiet periods, they would never publish.

@natematias indeed. This time there are so many #BadInternetBills that EFF's got a page on them, with a half-dozen action items. Sigh.

https://www.eff.org/deeplinks/2023/07/you-can-help-stop-these-bad-internet-bills


@natematias
I would 'engage thoughtfully' if I'd SEEN any CSAM on Mastodon.

But I haven't. So I'm not being reflexively defensive—I'm refusing to respond to hearsay fears.

@natematias

Thoughtfulness would involve treating the fedi like the rest of the open internet, not a platform. If there's CSAM on any website, go after it. Apply the same technology we use to clean up other networks on the web. This coverage applies equally to the internet in general, but no one would think of writing such a headline, because using a browser doesn't substantially increase your odds of stumbling on this stuff. Neither does joining Mastodon. It's misleading at best.

@natematias The Stanford team is entirely bad faith and should not be engaged with but exposed as frauds. It's the exact same guy who's been on Twitter promoting backdoors and mandatory scanning of people's personal devices for years. For whatever reason he has an agenda.

@natematias I'm currently doing my part to give a good report that further explains the paper, the authors' thoughts, and important context about what the network is and how key parts of it operate.

I hope to also highlight nascent efforts in development to better protect against CSAM on the fediverse.