@matthew_d_green A little meme about this that @mullvadnet posted and I think fits.
source: https://mastodon.online/@mullvadnet/109965262142928181
@matthew_d_green I see what they think they’re doing, but like many EU proposals it assumes that the government in question will always be a good actor.
“Certain identified apps” will start with Kik and then someone will decide it means iMessage.
@matthew_d_green Don't worry, because
1. this will only be used against really, really bad people
2. highly specialised and reliable AI-powered software will decide who poses a high risk of grooming.
3. Abuse is impossible, since nobody understands how the AI makes its selections. Therefore nobody can manipulate it either.
4. So far, something has been found on every suspect. That speaks for the reliability of the system.
/sarcasm off
@matthew_d_green The rough rule "Americans don't trust the government, Europeans don't trust companies" explains a lot of what we're seeing. I suspect this would be illegal if offered as a parental-control feature for kids, but apparently it's not a problem if it's just the government…
This is just beyond.
Warrants? Probable Cause? Meh, fuck that liberal nonsense, full speed ahead!
/Sarcasm
And see Ross Anderson's paper "Chat Control or Child Protection?" at https://arxiv.org/abs/2210.08958.
Ian Levy and Crispin Robinson's position paper "Thoughts on child safety on commodity platforms" is to be welcomed for extending the scope of the debate about the extent to which child safety concerns justify legal limits to online privacy. Their paper's context is the laws proposed in both the UK and the EU to give the authorities the power to undermine end-to-end cryptography in online communications services, with a justification of preventing and detecting child abuse and terrorist recruitment. Both jurisdictions plan to make it easier to get service firms to take down a range of illegal material from their servers; but they also propose to mandate client-side scanning - not just for known illegal images, but for text messages indicative of sexual grooming or terrorist recruitment. In this initial response, I raise technical issues about the capabilities of the technologies the authorities propose to mandate, and a deeper strategic issue: that we should view the child safety debate from the perspective of children at risk of violence, rather than from that of the security and intelligence agencies and the firms that sell surveillance software. The debate on terrorism similarly needs to be grounded in the context in which young people are radicalised. Both political violence and violence against children tend to be politicised, and as a result are often poorly policed. Effective policing, particularly of crimes embedded in wicked social problems, must be locally led and involve multiple stakeholders; the idea of using 'artificial intelligence' to replace police officers, social workers and teachers is just the sort of magical thinking that leads to bad policy. The debate must also be conducted within the boundary conditions set by human rights and privacy law, and to be pragmatic must also consider reasonable police priorities.
the line that breaks my brain is "future productivity of the children." where they let slip that probably the main reason they even claim to care whether kids get abused is that it might make them less effective serfs when they enter the workforce.
@matthew_d_green of course, the kids — why didn't we think of the kids
https://mullvad.net/en/blog/2023/2/1/eu-chat-control-law-will-ban-open-source-operating-systems/
@matthew_d_green Usual problem:
meanwhile the people who should be stopped will use different services and/or encryption.
This will just victimise honest people who feel the need for privacy, by (inevitably) leaking their personal data to the wrong people. In fact, because it's inevitable, they will be victimised just by knowing it's inevitable, even before it happens.
@matthew_d_green I am so very tired of "Think of the Children!" being used to justify and shield all manner of horrid actions.
Grooming and CSAM used against privacy and free speech.
Grooming and pedophilia used against the trans community.
Children's innocent minds versus non-conservative stuff in libraries.
Unborn lives set against bodily autonomy.
It's a litany of justifications for agendas more than anything to fix the problems.