Senate Democrats Seek to Ban Military AI Weapons, Mass Surveillance
#AI #AISafety #AIRegulation #AIEthics #AutonomousWeapons #Surveillance #Military #DefenseTech #NationalSecurity #USGovernment #HumanRights #Pentagon
AI is helping turn entire communities into targets.
Children are being hunted by drones, and war is moving beyond human control.
Join me in saying no to AI for warfare:
https://action.eko.org/a/no-ai-for-warfare
#NoAIWeapons #EndWar #AIEthics #AutonomousWeapons #HumanControl
#StopKillerRobots #TechEthics
After writing about how the US is using AI in the Iran conflict, it has come to my attention that Ukraine is deploying fully autonomous #AI systems in its own conflict against Russia.
https://www.xnite.me/ai/2026/03/15/ai-ukraine-russia-conflict.html
#Ukraine #UkraineWar #UkraineRussiaWar #Drone #Russia #RussiaIsATerroristState #Artificialintelligence #autonomy #autonomousweapons #war #conflict #news #worldnews #battletech

In the grinding Ukraine-Russia conflict, drones with AI terminal guidance and last-mile autonomy are hitting 70-80% of their targets—pushing closer to true lethal autonomy than the US approach in Iran. Ethical red lines blur on the front lines.
American Humanoid Robots to Battle Russian Soldiers on Ukrainian Frontline
#UkraineWar #RussiaUkraineWar #MilitaryRobotics #RobotSoldiers #HumanoidRobots #DefenseTechnology #AutonomousWeapons #FutureWarfare #AIDrones #MilitaryInnovation
https://winbuzzer.com/2026/03/12/microsoft-backs-anthropic-against-pentagon-ban-xcxwbn/
Microsoft Backs Anthropic Against Pentagon Ban
#AI #Anthropic #Microsoft #BigTech #Military #NationalSecurity #DefenseTech #TrumpAdministration #Claude #AutonomousWeapons #Pentagon
Lukasz Olejnik (@lukOlejnik)
The law of armed conflict was written assuming that slowing down was always an option. AI removed that option. If a software engineer ships a bug affecting a thousand users without code review, we call it negligence. What do we call it when the 'affected' are 'killed'? How should legal and ethical responsibility be defined then?
Anthropic's positioning of usage red lines gets a close examination in this piece https://www.lawfaremedia.org/article/the-situation--thinking-about-anthropic-s-red-lines and it is good.
Suggested refinements include adding more specificity to its definition of "mass surveillance" and adding details scoping out the use cases it objects to.
Anthropic's arguments regarding "autonomous lethal warfare" could also be further clarified, given its statements indicating that research on autonomous systems is acceptable but that using current AI technology is not appropriate because it is not reliable enough.
So the warfare red line is not a strict principle; it's a statement of current technological limitations. #Anthropic #Claude #AI #RedLines #Lawsuit #Amodei #MassSurveillance #AutonomousWeapons #SupplyChainRisk #DoD #Military
https://winbuzzer.com/2026/03/10/openai-google-employees-back-anthropics-pentagon-lawsuit-xcxwbn/
OpenAI and Google Employees Back Anthropic's Pentagon Lawsuit
#AI #Anthropic #Google #BigTech #OpenAI #Claude #AISafety #AIEthics #AIRegulation #USDepartmentOfWar #Military #Lawsuits #DefenseTech #AutonomousWeapons #MilitaryContracts