Technē without safety guardrails?
* "The public showdown between the Department of Defense and Anthropic began earlier this week after the two entered into discussions about the military’s use of the company’s Claude AI system. But the talks broke down as both sides appeared unable to come to an agreement over safety guardrails."
"US defense officials have pushed for unfettered access to Claude’s capabilities that they say can help protect the country, while Anthropic has resisted allowing its product to be used for mass surveillance or autonomous weapons systems that can kill people without human input." >>
https://www.theguardian.com/us-news/2026/feb/27/trump-anthropic-ai-federal-agencies
* The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’ >>
https://theconversation.com/the-pentagon-strongarmed-ai-firms-before-iran-strikes-in-dark-news-for-the-future-of-ethical-ai-277198
* Who decides when a machine kills? When private companies are enforcing ethical constraints and governments are not, something is very wrong >>
https://www.euractiv.com/opinion/who-decides-when-a-machine-kills/
#ethics #OpenAI #BigTech #surveillance #AutonomousWeapons #ADM #war #KillerRobots #LAWs #Google #LLMs #Claude #Anthropic #transparency #accountability #AutomatedDecisionMaking #algorithms #AlgorithmicTransparency