u programmed this with claude? the ai platform for bombing schools?
@jacqueline
I've tried. But I can't get bots which run on the fediverse to have good enough targeting algorithms. 🎯
@jacqueline I don't think they used Claude for that, more like ChatGPT and not checking the phone book.
U.S. military is using AI to help plan Iran air attacks, sources say, as lawmakers call for oversight

As the U.S. military expands its use of AI tools to pinpoint targets for airstrikes in Iran, members of Congress are calling for guardrails and greater oversight of the technology’s use in war

NBC News
@allanb @jacqueline turns out it was literally Claude for target selection

@davidgerard @jacqueline

It's not exactly open information, but Claude probably was involved in sorting out data in conjunction with Palantir's Maven, which presents targets for review. Apparently it is used to speed up Maven's work. Human review is supposed to happen afterwards.

Claude has a kind of "constitutional governance" built-in, which is different from some of the other AIs.

There is a stripped-down version for DoD.

https://www.the-independent.com/news/world/americas/us-politics/iran-school-attack-ai-investigation-b2937456.html

https://storage.courtlistener.com/recap/gov.uscourts.cand.465515/gov.uscourts.cand.465515.1.0_5.pdf

Old intelligence and AI? Behind the deadly attack on an Iranian girls’ school that left 175 dead

Key questions emerge around AI use and human errors in the Trump administration’s rush to strike Iran

The Independent
@jacqueline Claude's actually the one who refused to do that, unless I'm missing something? ChatGPT is the school-bomber.
@kinosian @jacqueline Anthropic got the headline they wanted for “refusing” but they are in use for target selection and planning via Palantir. https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military
US military reportedly used Claude in Iran strikes despite Trump’s ban

Trump calls Anthropic a ‘Radical Left AI company run by people who have no idea what the real World is all about’

The Guardian
@ottumm @jacqueline Sorry to hear that, but thank you for letting me know!
@jacqueline imho this is the same as saying "You are using the same pen they signed the Nuremberg laws with". People are accountable for these atrocities; AIs are just another (problematic) tool enabling bad people to do bad things more efficiently. Don't forget who is truly responsible here.

@vyllenjamnin @jacqueline they really aren't "just tools". This analogy is just wrong and extremely misleading.

They are services provided by an organization that have politics embedded into them.

A pen doesn't influence WHAT you're writing. An LLM's training process, which is controlled and managed by people with certain politics, very much influences what its output will be.

@aesthr @vyllenjamnin @jacqueline plus (a) pens *do* influence how and what we write (think of how ball point pens made cursive writing obsolete) and have political/cultural meaning (e.g. trump's executive order sharpies); and (b) uhhh if someone consciously decided to write something with the pen they wrote the nuremberg laws with, it would *absolutely* shape the way i interpret what they wrote. all tools are political, especially writing tools
@aparrish you're really missing the point here
@aesthr my intention was to agree with you and to support your underlying point...? even people who design pens (or other writing tools, from movable type to letraset on up the chain to word processors and llms) have political goals (implicit or explicit) that influence how people engage in communication with those tools
@aesthr (or maybe more clearly: the medium is the message, and making a clean distinction between the "form" and "content" of language is already conceding to the politics and logic of llm-pushers)

@aparrish my point is that people can use any pen to write any words they want.

No pen (not even that hypothetical one formerly used by Nazis) will prevent you from writing a certain sequence of words, or from writing about a certain topic. No pen is going to stop putting ink on the paper when your words conflict with some corporate content guideline, or if you write something illegal. No pen is going to write words that you didn't decide to write.

Generative "AI" does all of those things.

@aesthr i agree that generative AI does all of those things. but that's part of the reason that the context of language (including the tools used to produce it) matters. two stretches of text might have the same words, but the knowledge that one stretch was written by/with llms changes how i interpret it. and pens DO limit what can be written with them—think of the difficulty of writing arabic calligraphy with a ballpoint, or how the mongolian script was dropped in favor of cyrillic in mongolia
@aesthr (think also about how certain pens often used for graffiti are kept under lock and key in art stores and require 18+ ID. those pens make it possible to write in a certain way, and that writing is politicized to the point that the tool itself is regulated)
@aparrish you’re still missing the point here about tools vs services, about means of production. and I get the feeling you’re just trying to be contrarian for its own sake
@aesthr i'm not trying to be contrarian for its own sake. this line of reasoning is very important in my own justification for rejecting AI tools. i strongly agree with your original reply in the thread. all i'm trying to get at is that many of the same critiques we have about how llms have politics and shape the way we write and the horizons of expression *also* apply to other writing tools, even non-computational ones. and we can learn valuable lessons from the criticisms of those other tools

@aparrish i never said that tools in general don’t have politics embedded in them. Yet you went on an unsolicited lecture about it instead of engaging with what I was originally talking about.

It’s arrogant and condescending. Now leave me the fuck alone

@aesthr oh no! i'm very sorry—i should have dropped it earlier. i had genuinely believed that i was engaging with what you were saying (and supporting/agreeing with it!) but i understand that it didn't come off that way. i'll do better in the future!

@aparrish eh, you were fine. This guy was just being a douche.

I'd double-down and say that all tools shape behavior. Some more than others. Some better than others.

@herrold @aparrish Heyy even if I agree she was behaving douchily, please don't call her a guy if her pronouns imply otherwise, and as far as I've seen (which is just skimming over her profile) she never indicated being comfortable with being called a guy.

@unionwhore @aparrish it was unintentional. I apologize.

I didn't see any preferred pronouns on their profile.

@herrold @aparrish maybe it's a federation issue. If that happens you can usually view the profile by opening it in a browser.
@aesthr @jacqueline you are right! It is not just a tool, it is strongly influenced by corporate interests and built on hugely stolen work. Maybe I made my point badly. We should not forget that people are still responsible! Let's not shift focus away from whomever eventually signed off on this decision.

@vyllenjamnin @jacqueline

You're probably paying for the tool, or at least contributing to the so-called valuation of the company providing it, and hence providing that company with resources to offer child-killing services elsewhere.

@vyllenjamnin while I get your point, I think AIs (and of course people/companies behind them) are not as simple a tool as a pen. So I tend not to agree with your analogy in this case.

@jacqueline

@vyllenjamnin @jacqueline Bad analogy. A pen isn't pushing anyone to do war crimes.
@jacqueline Not advocating for any of them, but Anthropic famously refused to work with the Department of War. I think you were thinking of OpenAI

@vixalientoots @jacqueline

Uh.

They (famously or not, but reported in the news) had a huge contract, which fell apart because the extremely low bar they set for "nope" set off the administration, and so OpenAI swooped in with, apparently, "ooh, ooh, we have no semblance of even the lowest of moral standards! pick us!"

Citation:

https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban

@vixalientoots Anthropic were more than happy to work with the Department of War Crimes, they just didn't think that the models were ready to be used in fully autonomous weapons... *yet*.
@jacqueline nice try to dodge responsibility.
You elect a president who bombs schools and then blame a computer instead.
Are you also blaming the optics for school shootings? Whitewashing the murderers and even their weapons.
@ruff I'm going to take a wild guess here and say that @jacqueline did not vote for Trump.
@j3rn @jacqueline oh ok, that's totally changing everything. So let's blame the computer, yea, show him. make computer dumb again!

@jacqueline For anyone questioning if Claude was used for selecting targets during the US attack on Iran, including bombing a school, here are some sources showing that Claude was used (and is still in use):

https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-s/
https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military

Anthropic's Claude AI being used in Iran war by U.S. military, sources say

Two sources familiar with the U.S. military's use of artificial intelligence confirm that the U.S. used Anthropic's Claude AI model over the weekend for the attack on Iran — and is still using it.

@jacqueline that sure got some rancid replies
@s0 @jacqueline It's the very nature of any "AI" discussion. Somebody makes a clear, concise criticism of the technology and then the apologists and "I wrote my dissertation with chat gippity" simps pile on.

@jacqueline You know, they frame stuff this poorly so I'll do it too.

As a treat.

@jacqueline Do you live in a country that has directly or indirectly contributed to war, either by participating, supporting, or selling weapons in general? Because for your logic to hold true you probably live in one of about 7 countries with no standing army.