AI is helping turn entire communities into targets.

Children are being hunted by drones, and war is moving beyond human control.

Join me in saying no to AI for warfare:
https://action.eko.org/a/no-ai-for-warfare

#NoAIWeapons #EndWar #AIEthics #AutonomousWeapons #HumanControl
#StopKillerRobots #TechEthics

No AI for warfare

Warmongers like Donald Trump and Benjamin Netanyahu are escalating wars around the world. And behind it all is a powerful AI-powered war machine. Children are being hunted by drones as AI kill lists…

Ekō

After writing about how the US is using AI in the Iran conflict, it has come to my attention that Ukraine is using fully autonomous #AI systems in its own conflict against Russia.

https://www.xnite.me/ai/2026/03/15/ai-ukraine-russia-conflict.html

#Ukraine #UkraineWar #UkraineRussiaWar #Drone #Russia #RussiaIsATerroristState #Artificialintelligence #autonomy #autonomousweapons #war #conflict #news #worldnews #battletech

Ukraine's AI Drone Autonomy: From Human-Controlled to 'Fire and Forget' in the Russia War

In the grinding Ukraine-Russia conflict, drones with AI terminal guidance and last-mile autonomy are hitting 70-80% of targets, pushing closer to true lethal autonomy than the US approach in Iran. Ethical red lines blur on the front lines.

xnite's Blog

Lukasz Olejnik (@lukOlejnik)

The law of armed conflict was written on the premise that it is always possible to slow down, but AI has removed that option, and this raises a problem. If a software engineer ships a bug affecting thousands of people without code review, we call it negligence. But if that "impact" is death, how should legal and ethical responsibility be defined?

https://x.com/lukOlejnik/status/2031994007337722038

#aiethics #law #autonomousweapons #regulation

Lukasz Olejnik (@lukOlejnik) on X

The law of armed conflict was written assuming that slowing down was always an option. AI removed that option. If a software engineer ships a bug affecting a thousand users without code review, we call it negligence. What do we call it if the 'affected' is 'killed' and the…

X (formerly Twitter)

Anthropic's positioning of its usage red lines gets a close examination in this piece https://www.lawfaremedia.org/article/the-situation--thinking-about-anthropic-s-red-lines and it is a good one.

Suggested refinements include adding more specificity to its definition of "mass surveillance" and adding detail scoping the use cases it objects to.

Anthropic's arguments regarding "autonomous lethal warfare" could also be further clarified, given its statements indicating that research on autonomous systems is acceptable but that using current AI technology is not appropriate because it is not reliable enough.

So the warfare red line is not a strict principle; it is a statement of current technological limitations. #Anthropic #Claude #AI #RedLines #Lawsuit #Amodei #MassSurveillance #AutonomousWeapons #SupplyChainRisk #DoD #Military

Who is legally responsible for the consequences of deploying autonomous weapon systems? The Lieber Institute has published an international-law analysis of this question.
➡️ https://lieber.westpoint.edu/legal-accountability-ai-driven-autonomous-weapons/ («Legal Accountability for AI-Driven Autonomous Weapons»)
➡️ https://roter-kreis.de/Humanit%C3%A4res_V%C3%B6lkerrecht?utm_source=mastodon&utm_medium=social&utm_campaign=blog (Encyclopedia entry: International Humanitarian Law)

#Völkerrecht #HumanitäresVölkerrecht #HVR #InternationalLaw #IHL #KI #AI #AutonomeWaffensysteme #AutonomousWeapons #LieberInstitute #Westpoint

Legal Accountability for AI-Driven Autonomous Weapons

The rise of AI-driven autonomous weapon systems is forcing a re-examination of some of the most basic principles of IHL.

Lieber Institute West Point

#Anthropic is suing the #Trump admin, asking federal courts to reverse the #Pentagon’s decision designating the #AI company a “supply chain risk” over its refusal to allow unrestricted #military use of its #tech.

Anthropic filed two separate suits Monday, one in California federal court and another in the federal appeals court in Washington, DC, challenging different aspects of the Pentagon's actions against the company.

#law #surveillance #AutonomousWeapons
https://apnews.com/article/anthropic-trump-pentagon-hegseth-ai-104c6c39306f1adeea3b637d2c1c601b?utm_source=onesignal&utm_medium=push&utm_campaign=2026-03-09-Breaking+News

Anthropic seeks to undo 'supply chain risk' designation from Trump administration

Anthropic is suing the Trump administration, asking federal courts to reverse the Pentagon’s decision designating the artificial intelligence company a “supply chain risk” over its refusal to allow unrestricted military use of its technology. Anthropic filed two separate lawsuits Monday, one in California federal court and another in the federal appeals court in Washington, D.C., each challenging different aspects of the Pentagon’s actions against the company. The Pentagon last week formally designated the San Francisco tech company a supply chain risk after an unusually public dispute over how its AI chatbot Claude could be used in warfare. The lawsuits aim to undo the designation and block its enforcement.

AP News