Official statement from @Mer__edith: the new EU chat controls proposal for mass scanning is the same old surveillance with new branding.

Whether you call it a backdoor, a front door, or “upload moderation”, it undermines encryption & creates significant vulnerabilities.

https://signal.org/blog/pdfs/upload-moderation.pdf

(1/?)

@signalapp
> the new EU chat controls proposal for mass scanning is the same old surveillance with new branding

I don't disagree with anything @Mer__edith says in the linked blog post, but I think it misses the point. Yes, lack of technical knowledge among regulators makes the problem worse. But it's not the root cause of why the CSAM-scanning zombie refuses to die.

(2/?)

Let's zoom out a bit and look at the various motives for wanting "upload moderation". They include:

* genuine concern for children harmed in the production of CSAM

* law enforcement who see E2EE as a limit on legitimate search powers

* the techlash, prompted by legitimate concern about the dodgy practices of digital technology companies

* knee-jerk pearl-clutching ("think of the children") by both conservatives on the hard right and crypto-conservative "liberals" on the centre-right

(3/?)

Most important, IMHO, are the corporate DataFarmers, for whom robust privacy protection is bad for business but compulsory automated message scanning is a whole new market. It's a market both for their "cloud" services and for lucrative partnerships with spy agencies, state and corporate (eg Palantir). It's also a new source of MOLE food, especially useful if trained MOLEs ("AI") are judged to be derivative works under copyright law.

#MOLE: Machine Operated Learning Emulator

(4/5)

As my activist mentors always say: follow the money. The magic zombie-reanimation fluid is funding from the DataFarmers for anyone willing to run reputation-laundering campaigns in their interests; in this case, reframing mass privacy violation as a moral good, so they can recruit groups with legitimate concerns as Useful Idiots.

To protect E2EE effectively, we need to engage with those legitimate concerns, including in the messaging we put out against stuff like "upload moderation".

(5/5)