Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks
This is just them sticking to their principles.
Can’t say the evidence really backs you up on that one.

Anthropic, the AI company behind the Claude chatbot, was founded with a focus on safe technology. It now appears to be scaling back its safety commitments to stay competitive, having amended a set of self-imposed guidelines aimed at preventing the development of potentially dangerous AI.
I still think they deserve some credit for at least trying to do the right thing. I don’t envy the position they’re in.
Everyone’s rushing toward AGI. Trying to do it safely is meaningless if your competition, the ones who don’t care about safety, gets there first. You can slow things down if you’re in the lead, but if you’re second best, it’s just posturing. There is no second place in this race.