Anthropic is different because they're committed to safety.

...I'm being told they are no longer committed to safety.

https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/

Exclusive: Anthropic Drops Flagship Safety Pledge

In an abrupt shift, the company may release future AI models without ironclad safety guarantees

Time
@mttaggart Just in time for them to make a sale to the Pentagon, which wants fewer safeguards and has also threatened Anthropic with penalties if they don't drop the safeguards they do have (lol): https://www.theguardian.com/us-news/2026/feb/24/anthropic-claude-military-ai
US military leaders pressure Anthropic to bend Claude safeguards

Anthropic presents itself as most safety-forward AI firm and Pentagon has threatened penalties if it does not yield

The Guardian
@theorangetheme If they fold on that one, they deserve all the excoriation they'll get. You can't say "We won't let you build a murderbot with our AI" and then later go "Actually, murderbots are fine."
@mttaggart I'm not even sure what the military would do with Claude anyway. They have the same number of guns as before, but I guess now they can write slop reports faster?
@mttaggart Cynical: a human can still show up and violate my rights more effectively than AI.