Research by Amnesty International, the #AlgorithmicTransparency Institute and #AIForensics showed that this situation could be playing out right now in many homes. The data are not hypothetical: they are the findings of the report Driven into the Darkness: How TikTok's "For You" Feed Encourages Self-Harm and Suicidal Ideation, which exposes how this social network's recommendation algorithms may be wrecking the mental health of many minors.

Technē without safety guardrails?

* "The public showdown between the Department of Defense and Anthropic began earlier this week after they entered into discussions about the military’s use of the company’s Claude AI system. But the talks broke down as both sides appeared to be unable to come to agreement over safety guardrails."

"US defense officials have pushed for unfettered access to Claude’s capabilities that they say can help protect the country, while Anthropic has resisted allowing its product to be used for mass surveillance or autonomous weapons systems that can kill people without human input." >>
https://www.theguardian.com/us-news/2026/feb/27/trump-anthropic-ai-federal-agencies

* The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’ >>
https://theconversation.com/the-pentagon-strongarmed-ai-firms-before-iran-strikes-in-dark-news-for-the-future-of-ethical-ai-277198

* Who decides when a machine kills? When private companies are enforcing ethical constraints and governments are not, something is very wrong >>
https://www.euractiv.com/opinion/who-decides-when-a-machine-kills/

#ethics #OpenAI #BigTech #surveillance #AutonomousWeapons #ADM #war #KillerRobots #LAWs #Google #LLMs #Claude #Anthropic #transparency #accountability #AutomatedDecisionMaking #algorithms #AlgorithmicTransparency

Trump orders US agencies to stop use of Anthropic technology amid dispute over ethics of AI

Hours after the exclusion of Anthropic, OpenAI announces a fresh Pentagon deal, but says it will maintain the same safety guardrails at the heart of the dispute

The Guardian
In addition, you can read this article providing an overview of French case law on the right to information regarding partially automated administrative decisions, written by an interdisciplinary team of researchers consisting of Luc Pellissier, Maxime Zimmer, Noé Wagener, and Philippine Ducros. #algorithmictransparency https://revuedlf.com/droit-administratif/lineffectivite-du-droit-dacces-a-linformation-sur-les-algorithmes-une-etude-empirique/
» L'ineffectivité du droit d'accès à l'information sur les algorithmes : une étude empirique [The ineffectiveness of the right of access to information about algorithms: an empirical study] | Revue des droits et libertés fondamentaux

Just spent about an hour scrolling Mastodon. (Crazy Saturday night shenanigans). 😴

Clicked on a few things. Read an article. All good. 🤓

And I'm not angry at anything. ☺

Why aren't more media outlets here, I wonder? 🤔

#mastodon #algorithmictransparency #auspol

Germany tests algorithmic transparency through landmark enforcement cases: Four German legal actions against X, TikTok, Amazon, and Meta probe platform algorithms under DSA, GDPR, and AI Act, establishing precedents for democratic accountability. https://ppc.land/germany-tests-algorithmic-transparency-through-landmark-enforcement-cases/ #AlgorithmicTransparency #DigitalAccountability #GDPR #AIAct #DataProtection

“How Algorithms Steer Your Feed Without Asking.”

Dr. Eslami digs into how algorithmic systems decide which creators, pages, and posts you see — even when you didn’t choose them.
We discuss transparency, attention capture, and how to stay aware of hidden influence.

🎧 Listen to the full episode: https://youtu.be/xTgzG04hyXI

#AlgorithmicTransparency #TechEthics #DigitalSovereignty #Podcast #theinternetiscrack

Many fights for #AlgorithmicTransparency have focused on accessing code, but there's more to it: Who builds these systems? Who are the providers? Are they independently evaluated? That's why #AlgorithmRegisters are a key demand for accountability. So far, only France, Finland, the UK, Norway and Germany have official ones.

This week, #AutomatedSociety examines Europe's transparency laws and how they fall short. Subscribe to our newsletter now: https://automatedsociety.algorithmwatch.org/#/en/

Automated Society - AlgorithmWatch

Get the briefing on how automated systems impact real people, in Europe and beyond.

AlgorithmWatch

Ah, algorithmic transparency, what I told the European Parliament was the next step they should take after GDPR. Better six years late than never. Hey, maybe in another decade they’ll even consider implementing a General Data Minimisation Regulation (GDMR).

https://ar.al/2019/11/29/the-future-of-internet-regulation-at-the-european-parliament/

https://ar.al/2018/11/29/gdmr-this-one-simple-regulation-could-end-surveillance-capitalism-in-the-eu/

#regulation #EU #GDPR #GDMR #algorithmicTransparency #dataMinimisation https://mamot.fr/@davduf/113845060326024724

The Future of Internet Regulation at the European Parliament

A brief write-up of my talk at the EU Parliament last week with embedded videos of my talk and a link to my slides.

Aral Balkan

"While the Executive Branch pushes agencies to leverage private AI expertise, our concern is that more and more information on how those AI models work will be cloaked in the nigh-impenetrable veil of government secrecy. Because AI operates by collecting and processing a tremendous amount of data, understanding what information it retains and how it arrives at conclusions will all become incredibly central to how the national security state thinks about issues. This means not only will the state likely make the argument that the AI’s training data may need to be classified, but they may also argue that companies need to, under penalty of law, keep the governing algorithms secret as well.

As the memo says, “AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions.” As the US national security state attempts to leverage powerful commercial AI to give it an edge, there are a number of questions that remain unanswered about how much that ever-tightening relationship will impact much needed transparency and accountability for private AI and for-profit automated decision making systems."

https://www.eff.org/deeplinks/2024/11/us-national-security-state-here-make-ai-even-less-transparent-and-accountable

#USA #CyberSecurity #Surveillance #AI #AlgorithmicTransparency
