I've always really liked this one, because it does not deny that war can sometimes be not merely morally JUST but morally OBLIGATORY (this one ISN'T, as we're seeing reflected even in TRUMP APPOINTEES IN SOME OF THE CLEAREST POSITIONS TO KNOW resigning in protest and blaming #IsraelAtWarCrimes), yet it still emphasizes the human cost EVEN WHEN IT IS.

From this more jus ad bellum perspective, down to the jus in bello issues of approving (or rejecting!) a particular strike, I've always held that ETHICAL leadership in the #ProfessionOfArms demands not merely COMPLIANCE WITH THE LAW, but also--after that compliance has been shown--a frank reflection on whether the likely human cost on ALL sides IS WORTH IT.

As a Believer, I know that, on the Last Day, I am going to be called to account before my Creator for EVERY innocent life my decisions contributed to taking--however 'legally' (under MAN'S law) or accidentally. If I were ever recalled out of retirement to a strategic or operational role, I would ABSOLUTELY have this verse from the Qur'an framed and hung directly in front of my desk where I would see it every time I looked up:

"On that account: We ordained for the Children of Israel that if any one slew a person - unless it be for murder or for spreading mischief in the land - it would be as if he slew the whole people: and if any one saved a life, it would be as if he saved the life of the whole people. Then although there came to them Our messengers with clear signs, yet, even after that, many of them continued to commit excesses in the land." (Qur'an 5:32)

Ya Allah, may my work always be a restraint on those who would "continue to commit excesses in the land." 🤲

Sharing this because it is indeed a crucial topic for our deliberation--both inside the #ProfessionOfArms and across the populations to whom we are accountable.

I have long OPPOSED a blanket prohibition on lethal autonomous weapon systems, because to date ALL war crimes have been committed BY HUMANS--often humans who are tired, hungry, scared, and/or have just watched their best friend eviscerated by an enemy strike. As one of the panelists--it might have been "not that" George Lucas--noted at one of the ethics conferences I helped lead at the U.S. Naval War College a couple of decades ago, autonomous weapon systems wouldn't experience ANY of those things, and could have the Law of Armed Conflict hard-coded as their Prime Directive.

While that's still something we need to consider, today I'd have to support some fixed-length MORATORIUM on the use of autonomous weapons, to give both the technologies and our legal & accountability frameworks a chance to mature. We have seen all too many examples of TODAY'S artificial intelligence JUST MAKING SHIT UP, exhibiting the same biases as the humans (and/or datasets) that coded & trained it, and other equally critical failings that could indeed render it unfit for purpose. And as Ayman Salama notes in one of the comments, we have clearly demonstrated that our legal frameworks and accountability mechanisms aren't yet up to the task of apportioning responsibility for failures and delivering justice for their victims. Until those problems are convincingly resolved, prudence dictates that we not proceed.

I'd urge you to weigh in--whoever you are--on the International Committee of the Red Cross (ICRC) proposal. As my wife's research has emphasized, we don't get ethical results with our technologies by accident. We get what we DESIGN INTO THEM. And societal deliberation by BOTH experts AND THE PUBLIC is a huge part of making that happen.

May Allah guide us. 🤲

https://www.icrc.org/en/article/artificial-intelligence-military-domain-icrc-submits-recommendations-un-secretary-general