@craigbro
Imagine a hypothetical user whose main argument was that they don't want to be in any way associated with tools which facilitate U.S. wars, up to and including selecting which human beings to bomb.
Compared to the mainstream view, that would perhaps be a hard line from this user. But, it would arguably be based in reality.
In June of 2025, four execs, one each from Meta, OpenAI, Palantir, and Thinking Machines Lab, were sworn in as lieutenant colonels in the US Army Reserve.
So that's OpenAI, the leader in this space. Anthropic's Claude, the other big player, has been all over the news these last couple of weeks for being central to the ongoing operation in Iran, reportedly used alongside the Pentagon's Project Maven (where Palantir is a lead contractor) for selecting targets and other intelligence work.
There are plenty of articles on both these points; it's not like the companies are hiding their joy at winning these lucrative military contracts and deepening these ties.
If a user decided they want nothing to do with OpenAI and Anthropic for this reason, and would therefore like to try to do their computing as far away from these projects as possible, and they state this publicly...
That user is, without exception, a "reactionary twat"?
It reads like the answer would be yes, which would seem pretty wild to me, but perhaps I'm misreading.
Obviously, some users (well, some people) are reactionary twats. But I don't see how we could say that *all* users who have ethical issues with these developments are, no matter what their reasons... without being extremely reactionary, of course 😅