The EU’s “chat control” legislation is the most alarming proposal I’ve ever read. Taken in context, it is essentially a design for the most powerful text and image-based mass surveillance system the free world has ever seen.
This legislation, which is initially targeted at child abuse, creates the infrastructure for mandatory, built-in automated scanning tools that will search for *known* media, *unknown* media matching certain descriptions, and textual conversations.
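For a rough sense of what scanning for *known* media means in practice, here is a toy sketch of perceptual-hash matching, the general technique behind systems like PhotoDNA. The hash algorithm shown (a simple average hash), the database contents, and the match threshold are all invented for illustration; the proposal itself does not specify any particular technology.

    # Illustrative sketch only: "known media" detection typically works by
    # comparing perceptual hashes of images against a database of hashes of
    # known illegal material. This toy average-hash is not any vendor's
    # actual algorithm; the database entry and threshold are placeholders.
    from PIL import Image

    def average_hash(path, size=8):
        """Compute a 64-bit average hash of the image at `path`."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > avg else 0)
        return bits

    def hamming_distance(a, b):
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    # Hypothetical database of hashes of known images (placeholder value).
    KNOWN_HASHES = {0x8F3C_21A0_55E1_90BB}

    def matches_known_media(path, threshold=5):
        """Flag the image if it is 'close enough' to any known hash."""
        h = average_hash(path)
        return any(hamming_distance(h, k) <= threshold for k in KNOWN_HASHES)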
The legislation is vague about how this will be accomplished, but the “impact assessment” it cites is not. The assessment makes clear that mandatory scanning of images & text, especially in encrypted data, is the only solution the Commission will consider.
The proposal calls for detecting “grooming behavior”. If you wonder what that means, here is a brief description: roughly, it means developing new AI tools that can understand the content of textual conversations and can automatically report you to the police based on them.
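To make “grooming detection” concrete, here is a minimal, purely illustrative sketch of the kind of text classifier such a mandate implies. The training data, decision threshold, and reporting hook below are all invented; the point is only that a statistical score computed over private messages would decide whether someone gets reported.

    # Toy illustration of the kind of text classifier a "grooming detection"
    # mandate implies. Training data, threshold, and reporting hook are
    # entirely invented; a real system would be far larger but faces the
    # same basic problem: a model score deciding whether to report a person.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical labelled conversations (1 = flagged, 0 = benign).
    texts  = ["example flagged conversation ...", "ordinary chat about homework ..."]
    labels = [1, 0]

    vectorizer = TfidfVectorizer()
    clf = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

    def file_report(message, score):
        # Hypothetical reporting hook.
        print(f"would forward to authorities (score={score:.2f})")

    def scan_message(message, report_threshold=0.9):
        """Score a private message and 'report' it if the model is confident."""
        score = clf.predict_proba(vectorizer.transform([message]))[0, 1]
        if score >= report_threshold:
            file_report(message, score)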
You might ask how the EU, famous for its focus on privacy, justifies the development of automated text-analysis tools that scan your private chats. The Impact Assessment has an analysis. To say that this analysis is deficient is really much too kind.
As a technologist I have to point out that the technological solutions to do this *safely* don’t exist. They are at best at the research stage. ML textual analysis schemes do exist, and often misfire. These systems will need to accomplish this task perfectly and also privately.
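A quick back-of-the-envelope calculation shows why “often misfire” matters at this scale. The numbers below are purely illustrative assumptions, not figures from the proposal:

    # Back-of-the-envelope illustration (all numbers invented for the example):
    # even a very "accurate" classifier drowns investigators in false reports
    # when it scans everyone's messages.
    messages_per_day    = 10_000_000_000   # assumed EU-wide daily message volume
    false_positive_rate = 0.001            # assumed 99.9% accuracy on innocent text

    false_reports_per_day = messages_per_day * false_positive_rate
    print(f"{false_reports_per_day:,.0f} innocent messages flagged per day")
    # -> 10,000,000 innocent messages flagged per day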
The idea that we can deploy AI systems to read your private conversations and report crimes is frankly dystopian. Even if such systems existed, no reasonable democracy would vote for this. But this is what the EU is proposing to mandate and *build* in the next couple of years.
If you take comfort from the fact that these systems are aimed at “awful crimes” or “will be fully transparent”, please don’t. The nature of these proposals is that they will be easy to reprogram, either by law or by technical accident.
@matthew_d_green
So basically they're building a "Pre-Crime Unit"
@grey_ghost @matthew_d_green With one of their precogs being Tay, presumably.
@matthew_d_green If this were implemented and enforced in my country, I would probably just stop talking to minors online at all, and I don't know what else; I'm sure even that wouldn't protect me from every sort of accusation.
@matthew_d_green Even if the worst parts of the proposal can still be blocked by member states: the proposal also tries to establish mandatory age verification for the web, that is, slipping ID/passport verification in through the back door. This part is easily overlooked in analyses and needs to get more attention in public debates. @khaleesicodes
@matthew_d_green @againsthimself we are living in an era of precisely unreasonable democracies.
@matthew_d_green The EU knew what Jeffrey Epstein was doing and they turned a blind eye. There are so many glaring ways they ignore children in need of protection. I’m so sick of politicians using children as an excuse to take away rights
@matthew_d_green "automated detection tools have acquired a high degree of accuracy" I'm sorry what fucking planet are they on?
@matthew_d_green what's citation 206? because [citation needed]
@gsuberland they are asking the companies that sell that stuff how good their tools are, measured by those same companies' own metrics. @matthew_d_green

@matthew_d_green I see what they think they’re doing, but like many EU proposals it assumes that the government in question will always be a good actor.

“Certain identified apps” will start with Kik and then someone will decide it means iMessage.

@matthew_d_green Frankly my dear, I don't give a damn.
@matthew_d_green sure, but the EU is also famous for the world's worst copyright law, which also mandates the scanning of everything, and the only upshot of that particular catastrophe is that it's so awful they're terrified to actually implement it, lest they get dragged in front of a human rights tribunal. But that probably won't stop them from putting this fresh new hell on the books anyway.

@matthew_d_green Don't worry, because

1. it will only be used against really, really bad people
2. highly specialized and reliable AI-powered software decides who presents a high risk of grooming.
3. abuse is impossible, since nobody understands how the AI makes its selection. So nobody can manipulate it either.
4. so far, something has been found on every suspect. That speaks for the reliability of the system.

/Sarcasm off

#AI #privacy

@matthew_d_green Oh, I've got one more:
Your privacy is protected, because no human will get access to your digital life as long as you comply with all the known and unknown rules. And you surely have nothing to hide anyway, right?

@matthew_d_green The rough rule “Americans don’t trust the government, Europeans don’t trust companies” explains a lot of what we’re seeing. I suspect this would be illegal if offered as a parental control feature for kids, but apparently it’s not a problem if it’s just the government…

This is just beyond.

@matthew_d_green companies can follow either this law or the GDPR, but not both at the same time. Is the EU allowed to create contradictory regulations?
@matthew_d_green oh so nothing too tricky then, just a completely unachievable level of sentiment analysis
@gsuberland @matthew_d_green also analysis of a community/social context to see if behavior is exploitative or not. totally easy and not prone to random issues!

@matthew_d_green

Warrants? Probable Cause? Meh, fuck that liberal nonsense, full speed ahead!

/Sarcasm

@matthew_d_green we can’t even build this WTF.

@matthew_d_green

And see Ross Anderson's paper "Chat Control or Child Protection?" at https://arxiv.org/abs/2210.08958.

Abstract:

Ian Levy and Crispin Robinson's position paper "Thoughts on child safety on commodity platforms" is to be welcomed for extending the scope of the debate about the extent to which child safety concerns justify legal limits to online privacy. Their paper's context is the laws proposed in both the UK and the EU to give the authorities the power to undermine end-to-end cryptography in online communications services, with a justification of preventing and detecting child abuse and terrorist recruitment. Both jurisdictions plan to make it easier to get service firms to take down a range of illegal material from their servers; but they also propose to mandate client-side scanning - not just for known illegal images, but for text messages indicative of sexual grooming or terrorist recruitment. In this initial response, I raise technical issues about the capabilities of the technologies the authorities propose to mandate, and a deeper strategic issue: that we should view the child safety debate from the perspective of children at risk of violence, rather than from that of the security and intelligence agencies and the firms that sell surveillance software. The debate on terrorism similarly needs to be grounded in the context in which young people are radicalised. Both political violence and violence against children tend to be politicised and as a result are often poorly policed. Effective policing, particularly of crimes embedded in wicked social problems, must be locally led and involve multiple stakeholders; the idea of using 'artificial intelligence' to replace police officers, social workers and teachers is just the sort of magical thinking that leads to bad policy. The debate must also be conducted within the boundary conditions set by human rights and privacy law, and to be pragmatic must also consider reasonable police priorities.
