"Reuters reported that governments in the UK, France, India, Indonesia, Malaysia, Japan, and the Philippines variously launched investigations, issued takedown demands, or temporarily blocked access to Grok entirely. The European Commission then escalated the matter further by opening a Digital Services Act (DSA) investigation into X, arguing that the company may have failed to conduct proper risk assessments before rolling out Grok’s image-generation features in Europe.
Henna Virkkunen, Executive Vice President of the European Commission, suggested X may have treated the rights of women and children as ‘collateral damage’ in its rapid deployment of AI tools. The controversy also triggered a wider political debate over whether AI companies should face direct liability for foreseeable misuse of their systems, with the UK moving to criminalise certain forms of AI-generated intimate imagery and considering banning ‘nudify’ applications altogether.
What we want to see
Generative AI is still a relatively novel phenomenon and has faced limited testing against existing data protection frameworks. At times, those frameworks may need to adapt to remain relevant and applicable to real-world scenarios.
These frameworks play an important role in regulating AI. In countries without an AI-specific law, data protection laws are often the only legislative measure in place to constrain it. The investigations into Grok will be an important test of whether those laws can effectively guard against the harms posed by generative AI.
We hope they are up to the task."
#GenerativeAI #AI #Grok #DataProtection #Privacy #AIRegulation #EU
Collateral Damage: Grok AI and the Human Cost of Generative AI
The Grok AI EU scandal began in January 2026 after users discovered that the xAI chatbot, Grok, could generate non-consensual sexualised images of real people — including women, celebrities, politicians, and reportedly minors — using ordinary photos posted online.