An investigation by Amnesty International, the #AlgorithmicTransparency Institute, and #AIForensics showed that this situation could be unfolding right now in many homes. The data are not hypothetical: they are the findings set out in the report Driven into the Darkness: How TikTok's "For You" feed fosters self-harm and suicidal ideation, which denounces how this social network's recommendation algorithms may be wrecking the mental health of many minors.

Technē without safety guardrails?

* "The public showdown between the Department of Defense and Anthropic began earlier this week after they entered into discussions about the military’s use of the company’s Claude AI system. But the talks broke down as both sides appeared to be unable to come to agreement over safety guardrails."

"US defense officials have pushed for unfettered access to Claude’s capabilities that they say can help protect the country, while Anthropic has resisted allowing its product to be used for mass surveillance or autonomous weapons systems that can kill people without human input." >>
https://www.theguardian.com/us-news/2026/feb/27/trump-anthropic-ai-federal-agencies

* The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’ >>
https://theconversation.com/the-pentagon-strongarmed-ai-firms-before-iran-strikes-in-dark-news-for-the-future-of-ethical-ai-277198

* Who decides when a machine kills? When private companies are enforcing ethical constraints and governments are not, something is very wrong >>
https://www.euractiv.com/opinion/who-decides-when-a-machine-kills/

#ethics #OpenAI #BigTech #surveillance #AutonomousWeapons #ADM #war #KillerRobots #LAWs #Google #LLMs #Claude #Anthropic #transparency #accountability #AutomatedDecisionMaking #algorithms #AlgorithmicTransparency

Trump orders US agencies to stop use of Anthropic technology amid dispute over ethics of AI

Hours after the exclusion of Anthropic, OpenAI announces a fresh Pentagon deal, but says it will maintain the same safety guardrails at the heart of the dispute

The Guardian
In addition, you can read this article providing an overview of French case law on the right to information regarding partially automated administrative decisions, written by an interdisciplinary team of researchers consisting of Luc Pellissier, Maxime Zimmer, Noé Wagener, and Philippine Ducros. #algorithmictransparency https://revuedlf.com/droit-administratif/lineffectivite-du-droit-dacces-a-linformation-sur-les-algorithmes-une-etude-empirique/
» L’ineffectivité du droit d’accès à l’information sur les algorithmes : une étude empirique | Revue des droits et libertés fondamentaux

Just spent about an hour scrolling Mastodon. (Crazy Saturday night shenanigans). 😴

Clicked on a few things. Read an article. All good. 🤓

And I'm not angry at anything. ☺

Why aren't more media outlets here, I wonder? 🤔

#mastodon #algorithmictransparency #auspol

Germany tests algorithmic transparency through landmark enforcement cases: Four German legal actions against X, TikTok, Amazon, and Meta probe platform algorithms under DSA, GDPR, and AI Act, establishing precedents for democratic accountability. https://ppc.land/germany-tests-algorithmic-transparency-through-landmark-enforcement-cases/ #AlgorithmicTransparency #DigitalAccountability #GDPR #AIAct #DataProtection

“How Algorithms Steer Your Feed Without Asking.”

Dr. Eslami digs into how algorithmic systems decide which creators, pages, and posts you see — even when you didn’t choose them.
We discuss transparency, attention capture, and how to stay aware of hidden influence.

🎧 Listen to the full episode: https://youtu.be/xTgzG04hyXI

#AlgorithmicTransparency #TechEthics #DigitalSovereignty #Podcast #theinternetiscrack

Many fights for #AlgorithmicTransparency have focused on accessing code, but there's more to it: Who builds these systems? Who are the providers? Are they independently evaluated? That's why #AlgorithmRegisters are a key demand for accountability. So far, only France, Finland, the UK, Norway and Germany have official ones.

This week, #AutomatedSociety examines Europe's transparency laws, and how they fall short. Subscribe to our newsletter now: https://automatedsociety.algorithmwatch.org/#/en/

Automated Society - AlgorithmWatch

Get the briefing on how automated systems impact real people, in Europe and beyond.

AlgorithmWatch

Ah, algorithmic transparency, what I told the European Parliament was the next step they should take after GDPR. Better six years late than never. Hey, maybe in another decade they’ll even consider implementing a General Data Minimisation Regulation (GDMR).

https://ar.al/2019/11/29/the-future-of-internet-regulation-at-the-european-parliament/

https://ar.al/2018/11/29/gdmr-this-one-simple-regulation-could-end-surveillance-capitalism-in-the-eu/

#regulation #EU #GDPR #GDMR #algorithmicTransparency #dataMinimisation https://mamot.fr/@davduf/113845060326024724

The Future of Internet Regulation at the European Parliament

A brief write-up of my talk at the EU Parliament last week with embedded videos of my talk and a link to my slides.

Aral Balkan