The "Terminator" is becoming reality, but its creators already fear uncontrollable carnage (and the risk of an autonomous targeting error is immense)

The war of the future is increasingly being written in the ink of technology, particularly through the development of humanoid robots.

Sciencepost
Can an AI decide to kill? The debate is heating up and moving beyond science fiction. There is real unease about military autonomy.
https://www.ledevoir.com/economie/techno/966483/machine-elle-desormais-droit-vie-mort #Science #Innovation #AI #AutonomousWeapons #Ethics
Does the machine now have the power of life and death?

With AI's entry into warfare, the machine is marginalizing human decision-making.

Le Devoir

AI is helping turn entire communities into targets.

Children are being hunted by drones, and war is moving beyond human control.

Join me in saying no to AI for warfare:
https://action.eko.org/a/no-ai-for-warfare

#NoAIWeapons #EndWar #AIEthics #AutonomousWeapons #HumanControl
#StopKillerRobots #TechEthics

No AI for warfare

Warmongers like Donald Trump and Benjamin Netanyahu are escalating wars around the world. And behind it all is a powerful AI-powered war machine. Children are being hunted by drones as AI kill lists tu…

Ekō

After writing about how the US is using AI in the Iran conflict, it has come to my attention that Ukraine is using fully autonomous #AI systems in its own conflict against Russia.

https://www.xnite.me/ai/2026/03/15/ai-ukraine-russia-conflict.html

#Ukraine #UkraineWar #UkraineRussiaWar #Drone #Russia #RussiaIsATerroristState #Artificialintelligence #autonomy #autonomousweapons #war #conflict #news #worldnews #battletech

Ukraine's AI Drone Autonomy: From Human-Controlled to 'Fire and Forget' in the Russia War

In the grinding Ukraine-Russia conflict, drones with AI terminal guidance and last-mile autonomy are hitting 70-80% of their targets—pushing closer to true lethal autonomy than the US approach in Iran. Ethical red lines are blurring on the front lines.

xnite's Blog

Lukasz Olejnik (@lukOlejnik)

The law of armed conflict was written on the premise that slowing down was always an option, but AI, he argues, has removed that option. When a software engineer ships a bug affecting thousands of people without code review, we call it negligence. He asks how legal and ethical responsibility should be defined when that "effect" is death.

https://x.com/lukOlejnik/status/2031994007337722038

#aiethics #law #autonomousweapons #regulation

Lukasz Olejnik (@lukOlejnik) on X

The law of armed conflict was written assuming that slowing down was always an option. AI removed that option. If a software engineer ships a bug affecting a thousand users without code review, we call it negligence. What do we call it if the 'affected' is 'killed' and the…

X (formerly Twitter)

Anthropic's positioning of its usage red lines gets a close examination in this piece https://www.lawfaremedia.org/article/the-situation--thinking-about-anthropic-s-red-lines and it is a good one.

Suggested refinements include adding more specificity to its definition of "mass surveillance" and adding details scoping out the use cases it objects to.

Anthropic's arguments regarding "autonomous lethal warfare" could also be further clarified, given its statements indicating that research on autonomous systems is acceptable, but that using current AI technology is not appropriate because it is not reliable enough.

So the warfare red line is not a strict principle; it's a statement of current technological limitations. #Anthropic #Claude #AI #RedLines #Lawsuit #Amodei #MassSurveillance #AutonomousWeapons #SupplyChainRisk #DoD #Military