@matthew_d_green A little meme about this that @mullvadnet posted, which I think fits.
source: https://mastodon.online/@mullvadnet/109965262142928181
@matthew_d_green I see what they think they’re doing, but like many EU proposals it assumes that the government in question will always be a good actor.
“Certain identified apps” will start with Kik and then someone will decide it means iMessage.
@matthew_d_green Don't worry, because
1. it will only be used against really, really bad people
2. highly specialized, reliable, AI-powered software decides who poses a high risk of grooming.
3. Abuse is out of the question, since nobody understands how the AI makes its selections. So nobody can manipulate it either.
4. So far, something has been found on every suspect. That speaks for the system's reliability.
/Sarcasm off
@matthew_d_green The rough rule “Americans don’t trust the government, Europeans don’t trust companies” explains a lot of what we’re seeing. I suspect this would be illegal if offered as a parental-control feature for kids, but apparently it’s not a problem if it’s just the government…
This is just beyond.
Warrants? Probable Cause? Meh, fuck that liberal nonsense, full speed ahead!
/Sarcasm
And see Ross Anderson's paper "Chat Control or Child Protection?" at https://arxiv.org/abs/2210.08958.
Ian Levy and Crispin Robinson's position paper "Thoughts on child safety on commodity platforms" is to be welcomed for extending the scope of the debate about the extent to which child-safety concerns justify legal limits to online privacy. Their paper's context is the laws proposed in both the UK and the EU to give the authorities the power to undermine end-to-end cryptography in online communications services, with the justification of preventing and detecting child abuse and terrorist recruitment. Both jurisdictions plan to make it easier to get service firms to take down a range of illegal material from their servers; but they also propose to mandate client-side scanning, not just for known illegal images but for text messages indicative of sexual grooming or terrorist recruitment.

In this initial response, I raise technical issues about the capabilities of the technologies the authorities propose to mandate, and a deeper strategic issue: that we should view the child-safety debate from the perspective of children at risk of violence, rather than from that of the security and intelligence agencies and the firms that sell surveillance software. The debate on terrorism similarly needs to be grounded in the context in which young people are radicalised. Both political violence and violence against children tend to be politicised, and as a result are often poorly policed.

Effective policing, particularly of crimes embedded in wicked social problems, must be locally led and involve multiple stakeholders; the idea of using 'artificial intelligence' to replace police officers, social workers and teachers is just the sort of magical thinking that leads to bad policy. The debate must also be conducted within the boundary conditions set by human rights and privacy law, and to be pragmatic must also consider reasonable police priorities.