"This may not be the Nuremberg trial, but we all know that the excuse of “following orders” is not an alibi when you know what you are doing. And everybody at Meta knew what they were doing. They knew they were designing systems to maximize engagement that polarized society. They knew this when they turned privacy into an exploitable variable. They knew it when the evidence mounted up on the harm Instagram was causing. They knew this when the platform became an infrastructure of propaganda, hatred and manipulation. And they knew this because many of these damages were documented, denounced and discussed inside and outside the company.

And now the same machinery is beginning to be applied inwards. Meta employees who for years helped surveil, profile, and exploit billions of users now discover that they too can be monitored, measured and turned into training data. As The New York Times notes with more than a hint of irony, “Meta’s embrace of AI is making its employees miserable”.

Let’s be clear: Meta’s workforce has not had a Damascene moment. It’s something more human and more uncomfortable: the belated realization that the system they helped build had no limits; it just hadn’t come for them yet."

https://edans.medium.com/e7bb510a9127

#Meta #SocialMedia #Surveillance #Capitalism #Privacy #DataProtection

Meta’s workforce has finally realized that what goes around comes around

Some companies’ demise comes not from their inability to turn a profit, but from their failure to provide a moral justification for their existence.

Medium

Manage all NAS backups from a single dashboard—monitor jobs, track storage, and receive alerts with zero manual overhead.

Learn more: https://zurl.co/k9lan

#DataProtection #CyberSecurity #BackupSolutions #CloudComputing #ITInfrastructure #DigitalTransformation #EnterpriseIT #DataManagement

2/2
And on the Isle of Man, sixteen health-tech vendors are right now inside Manx Care infrastructure -- every one on AWS, Azure, or GCP, every one subject to US jurisdiction under the CLOUD Act -- under a governance framework that hasn't received Royal Assent yet.
Same special category data. Same missing DPIA. Same question.
haunted.lighthouse.co.im/articles/wheres-the-dpia/
#GDPR #DataProtection #NHS #Palantir #DigitalSovereignty #DPIA #CloudAct #IsleOfMan #EU #DataGovernance
GTech Booster

"NHS England has granted external staff from companies including Palantir “unlimited access” to identifiable patient data while working on a part of its flagship data platform. 

The change, first outlined in an internal briefing note seen by the FT, relates to the National Data Integration Tenant, described as a “safe haven for data” before it is “pseudonymised” and transferred to other systems.

The NDIT is an area within the Federated Data Platform, a tool that connects disparate NHS data into a single system, which Palantir won a £330mn contract in 2023 to build.

Under the plan, NHS England has agreed to create an “admin” role, which the briefing acknowledges “permits unlimited access to non-NHSE staff” to the NDIT and the identifiable patient information held within it.

As well as Palantir employees, this could include staff from consultancy firms who have been drafted in to work on the FDP.

The change marks a significant departure from current practice, which requires any individual working with the NDIT to apply for access to specific data sets."

https://www.ft.com/content/8ce1b9be-1d51-466b-90de-54bff1a504ca?syn-25a6b1a6=1

#UK #NHS #Palantir #BigTech #Privacy #DataProtection

NHS to grant Palantir contractors ‘unlimited access’ to patient data

Staff from consultancy firms drafted in to work on Federated Data Platform project set to benefit

Financial Times

⚖️ Want to stay up to date on all #dataprotection and #privacy news, as well as all the latest court and DPA decisions across Europe?

📩 Become a part of the thousands of experts who are subscribed to GDPRtoday, our free weekly newsletter! 👉 https://newsletter.noyb.eu/pf/433/5gqtL

GDPRtoday

"Reuters reported that governments in the UK, France, India, Indonesia, Malaysia, Japan, and the Philippines either launched investigations, issued takedown demands, or temporarily blocked access to Grok entirely. The European Commission then escalated the matter further by opening a Digital Services Act (DSA) investigation into X, arguing the company may have failed to conduct proper risk assessments before rolling out Grok’s image-generation features in Europe.

Henna Virkkunen, Executive Vice President of the European Commission suggested X may have treated the rights of women and children as ‘collateral damage’ in its rapid deployment of AI tools. The controversy also triggered wider political debate over whether AI companies should face direct liability for foreseeable misuse of their systems, with the UK moving to criminalise certain forms of AI-generated intimate imagery and considering bans on ‘nudify’ applications altogether.

What we want to see

Generative AI is still a relatively novel phenomenon and has had limited testing against existing data protection frameworks. At times, those frameworks may need to be adaptable to remain relevant and applicable to real-world scenarios.

These frameworks play an important role in regulating AI. In countries without an AI-specific law, data protection laws are often the only legislative measure in place to constrain it. These investigations into Grok AI will be an important test of whether those laws can effectively guard against the harms posed by generative AI.

We hope they are up to the task."

https://privacyinternational.org/news-analysis/5770/collateral-damage-grok-ai-and-human-cost-generative-ai

#GenerativeAI #AI #Grok #DataProtection #Privacy #AIRegulation #EU

Collateral Damage: Grok AI and the Human Cost of Generative AI

The Grok AI EU scandal began in January 2026 after users discovered that the xAI chatbot, Grok, could generate non-consensual sexualised images of real people — including women, celebrities, politicians, and reportedly minors — using ordinary photos posted online.

Privacy International
More and more websites want proof you’re human. Blame the bots | The-14

Websites increasingly ask users to prove they are human as AI-powered bots grow smarter, faster and harder for online systems to detect.

The-14
ICYMI: ChatGPT sued over secret data transfers to Meta and Google: A class action filed May 13 alleges ChatGPT secretly transmitted users' conversation topics and personal identifiers to Meta and Google without their consent. https://ppc.land/chatgpt-sued-over-secret-data-transfers-to-meta-and-google/ #ChatGPT #PrivacyConcerns #DataProtection #LegalAction #Meta
ChatGPT sued over secret data transfers to Meta and Google

PPC Land