Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks

https://lemmus.org/post/20477411

How is a private company the voice of reason in this?

Anthropic was founded by former OpenAI employees who left largely due to ethical and safety concerns about how OpenAI was being run. This is just them sticking to their principles.

Can’t say the evidence really backs you up on that one.

cbc.ca/…/anthropic-ai-safety-committments-9.71073…

www.bbc.com/news/articles/c62dlvdq3e3o

Anthropic, the AI company with a safety-first reputation, is changing a core guardrail | CBC News

Anthropic, the AI company behind the Claude chatbot that was founded with a focus on safe technology, appears to be scaling back its safety commitments in order to keep the company competitive, after it amended a set of self-imposed guidelines aimed at preventing the development of AI that could potentially be dangerous.

I still think they deserve some credit for at least trying to do the right thing. I don’t envy the position they’re in.

Everyone’s rushing toward AGI. Trying to do it safely is meaningless if your competition - the ones who don’t care about safety - gets there first. You can slow things down if you’re in the lead, but if you’re second best, it’s just posturing. There is no second place in this race.

Anthropic’s CEO admits compromising with authoritarian regimes to secure AI funding

Anthropic CEO Dario Amodei admits his company is making compromises with authoritarian regimes in the race to build advanced AI.

The Decoder

No AI bro company is on the path to AGI. Transformer technology will not lead to AGI.

I never claimed it will.