Anthropic is different because they're committed to safety.

...I'm being told they are no longer committed to safety.

https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/

Exclusive: Anthropic Drops Flagship Safety Pledge

In an abrupt shift, the company may release future AI models without ironclad safety guarantees

Time
@mttaggart Just in time for them to make a sale to the Pentagon, which wants fewer safeguards and has also threatened Anthropic if they don't drop the existing safeguards they do have (lol): https://www.theguardian.com/us-news/2026/feb/24/anthropic-claude-military-ai
US military leaders pressure Anthropic to bend Claude safeguards

Anthropic presents itself as most safety-forward AI firm and Pentagon has threatened penalties if it does not yield

The Guardian
@theorangetheme If they fold on that one, they deserve all the excoriation they'll get. You can't go "We won't let you build a murderbot with our AI" and then later go "Actually murderbots are fine."
@mttaggart I'm not even sure what the military would do with Claude anyway. They have the same number of guns as before, but I guess now they can write slop reports faster?
@mttaggart Cynical: a human can still show up and violate my rights more effectively than AI.

@theorangetheme @mttaggart

Autonomous death robots for plausible deniability during any future Nuremberg-like trials.

@theorangetheme @mttaggart Wait until some brilliant mind decides to wire those triggers to some agentic AI bot...

@theorangetheme @mttaggart

"OK, Claude. Whenever you see an enemy plane, I want you to shoot it down without asking me first."

"OK, Claude. Whenever you see a Palestinian, I want you to highlight that person in my HUD. This applies especially if the person is trying to hide from me."