
As the U.S. military expands its use of AI tools to pinpoint targets for airstrikes in Iran, members of Congress are calling for guardrails and greater oversight of the technology's use in war.
It's not exactly open information, but Claude was probably involved in sorting out data in conjunction with Palantir's Maven, which presents targets for review. Apparently it is used to speed up Maven's work. Human review is supposed to happen afterwards.
Claude has a kind of "constitutional governance" built in, which is different from some of the other AIs.
There is a stripped-down version for DoD.
https://storage.courtlistener.com/recap/gov.uscourts.cand.465515/gov.uscourts.cand.465515.1.0_5.pdf
@vyllenjamnin @jacqueline they really aren't "just tools". This analogy is just wrong and extremely misleading.
They are services provided by an organization that have politics embedded into them.
A pen doesn't influence WHAT you're writing. An LLM's training process, which is controlled and managed by people with certain politics, very much influences what its output will be.
RE: https://mastodon.social/@glyph/116220202738664759
@aesthr @vyllenjamnin @jacqueline Exactly right - see also this thread by @glyph 👇
@aparrish my point is that people can use any pen to write any words they want.
No pen (not even that hypothetical one formerly used by Nazis) will prevent you from writing a certain sequence of words, or from writing about a certain topic. No pen is going to stop putting ink on the paper when your words conflict with some corporate content guideline, or if you write something illegal. No pen is going to write words that you didn't decide to write.
Generative "AI" does all of those things.
@aparrish i never said that tools in general don’t have politics embedded in them. Yet you went on an unsolicited lecture about it instead of engaging with what I was originally talking about.
It’s arrogant and condescending. Now leave me the fuck alone
@aparrish eh, you were fine. This guy was just being a douche.
I'd double-down and say that all tools shape behavior. Some more than others. Some better than others.
@unionwhore @aparrish it was unintentional. I apologize.
I didn't see any preferred pronouns on their profile.
You're probably paying for the tool, or at least contributing to the so-called valuation of the company providing it, hence giving the company resources to provide child-killing services elsewhere.
@vyllenjamnin while I get your point, I think AIs (and of course people/companies behind them) are not as simple a tool as a pen. So I tend not to agree with your analogy in this case.
Uh.
They (famously or not, but it was reported in the news) had a huge contract, which fell apart because even the extremely low bar they set for "nope" set off the administration, so OpenAI swooped in with, apparently, "ooh, ooh, we have no semblance of even the lowest of moral standards! pick us!"
Citation:
https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban
@jacqueline For anyone questioning if Claude was used for selecting targets during the US attack on Iran, including bombing a school, here are some sources showing that Claude was used (and is still in use):
https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-s/
https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military

Two sources familiar with the U.S. military's use of artificial intelligence confirm that the U.S. used Anthropic's Claude AI model over the weekend for the attack on Iran — and is still using it.
@jacqueline You know, they frame stuff this poorly so I'll do it too.
As a treat.