Anthropic is different because they're committed to safety.

...I'm being told they are no longer committed to safety.

https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/

Exclusive: Anthropic Drops Flagship Safety Pledge

In an abrupt shift, the company may release future AI models without ironclad safety guarantees

Time
Remember kids, any promise from corpos to stop doing a thing that is making money is a bald-faced lie.
@mttaggart
Fixed:
Remember kids, any promise from corpos is a bald-faced lie.
@mttaggart Because the first law of aibotics is: an AIbot may not injure its grifters' quarterly cash flow or, by inaction, allow said cash flow to come to harm before the quarterly report is out.

@mttaggart I'm an optimist. Sometimes they believe what they say and then are fired.

Or they just become a different person. A person that wants money more.

@NegativeK @mttaggart right, but if someone getting fired, or even just promoted sideways, can result in a policy reversal, then that means *all* corporate promises are worthless right from day one. It doesn't matter whether some individual in the company *meant it*. It's systemically impossible for the company as a whole to stand by its word.
@mttaggart you're right, putting the crash-out in the afternoon *is* a productivity enhancer
@mttaggart Just in time for them to make a sale to the Pentagon, which wants fewer safeguards and has threatened Anthropic with penalties if it doesn't drop the existing safeguards it does have (lol): https://www.theguardian.com/us-news/2026/feb/24/anthropic-claude-military-ai
US military leaders pressure Anthropic to bend Claude safeguards

Anthropic presents itself as most safety-forward AI firm and Pentagon has threatened penalties if it does not yield

The Guardian
@theorangetheme If they fold on that one, they deserve all the excoriation they'll get. You can't go "We won't let you build a murderbot with our AI" and then later go "Actually murderbots are fine."
@mttaggart I'm not even sure what the military would do with Claude anyway. They have the same number of guns as before, but I guess now they can write slop reports faster?
@mttaggart Cynical: a human can still show up and violate my rights more effectively than AI.

@theorangetheme @mttaggart

Autonomous death robots for plausible deniability during any future Nuremberg-like trials.

@theorangetheme @mttaggart Wait until some brilliant mind decides to wire those triggers to some agentic AI bot...

@theorangetheme @mttaggart

"OK, Claude. Whenever you see an enemy plane, I want you to shoot it down without asking me first."

"OK, Claude. Whenever you see a Palestinian, I want you to highlight that person in my HUD. This applies especially if the person is trying to hide from me."

@mttaggart

Thinking about how Google quietly removed any mention of "Don't be evil"
@rachel @mttaggart now they're all "we can be evil, it's fine, what are you going to do about it at this point"

@rachel @mttaggart I think this bugs me in the same way as politicians no longer pretending not to be cartoonishly evil.

Obviously it's terrible that <entity> is doing bad things, but it felt nicer to live in a world where they felt that they had to lie about it by claiming they weren't doing the bad thing.

@jwdt @mttaggart Yeah...

In the past, I joked that Google's "Don't Be Evil" was a warrant canary, but even that is an idealistic idea, implying that a large org like that was ever really "one of the good ones".

@rachel @mttaggart I don't even know how you could parody something like this.

"Corporation removes (don't be evil | we value safety) from their website"... I'd expect that to end with "after their CEO mysteriously vanished and was replaced by Dr Evil".

@mttaggart

Huh. They went from zero to evil in 0.05 googles.

@mttaggart

Anthropic’s chief science officer Jared Kaplan told TIME in an exclusive interview: “We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

Kaplan, the Anthropic executive and co-founder, denied the company’s decision to change course was a capitulation to market incentives as the race for superintelligence accelerates.

Amazing. You can literally state this as:
"While we're doing this because other people in the market can move faster if they don't also have this restriction, meaning the market is providing incentives to remove this restriction, we categorically deny that removing this restriction has anything to do with market incentives."

@mttaggart Didn’t the US DOD threaten them with contract loss if they didn’t agree to their models being used to kill people?

I would guess they would have to go pretty deep into the core of their reasoning training material to make the models not refuse hurting humans. 🤔

@mttaggart @isotopp So it goes. And also: no shit, Sherlock.
@mttaggart I would ALWAYS bet against Anthropic, just for their Effective Altruism DNA alone.
@codinghorror @mttaggart "That's a nice company you've got there. Would be a shame if something were to happen to it. Can you just change this one thing?" https://www.axios.com/2026/02/24/anthropic-pentagon-claude-hegseth-dario
Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards

"The only reason we're still talking to these people is we need them and we need them now."

Axios

@mttaggart "“We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

even if those "blazing ahead" are actually creating loads of fires???

@mttaggart From Hegseth’s ultimatum in the morning to them complying after 4 corporate hours… that’s like 40 human years of strongly worded dissent.
@mttaggart They are a public benefit corporation in a state (Delaware) where they aren't forced to actually do anything to benefit the public. There never was reason to trust them. https://www.wheresyoured.at/premium-the-haters-guide-to-anthropic/
Premium: The Hater's Guide to Anthropic

In May 2021, Dario Amodei and a crew of other former OpenAI researchers formed Anthropic and dedicated themselves to building the single-most-annoying Large Language Model company of all time.  Pardon me, sorry, I mean safest, because that’s the reason that Amodei and his crew claimed was why they left

Ed Zitron's Where's Your Ed At

@mttaggart These are such nice people. I'd love to share a cup of tea with them.

The new version of the policy (...) promises to “delay” Anthropic's AI development if leaders both consider Anthropic to be the leader of the AI race and think the risks of catastrophe to be significant.

(...)

The arrival of powerful new models meant that, in 2025, Anthropic announced it could not rule out the possibility of these models facilitating a bio-terrorist attack. But while they couldn't rule it out (...)

@mttaggart They want in on the Hegseth DoW action.
@mttaggart
So Anthropic's "don't be evil" moment? I'm shocked.
@mttaggart The part of that that upsets me the most is their justification. I get it was always a lie and they were always going to do whatever the fuck (they were already working with Palantir before dropping that pledge), but they seem to think that the inevitability argument holds water with folks. They're probably even right about that, which makes it extra upsetting...

@mttaggart

all tech seems to go with the google evolution:

- do no evil
- try really hard not to do evil
- try to avoid being caught doing evil
- fuck it. full evil it is