#Anthropic once promised us it would halt its models if it lost faith in its ability to control them. The Pentagon forced it to drop that and other guardrails under threat of cancelling contracts.
My money is still on OpenAI becoming the Enron of the era, but knowing every model is now all gas, no brakes, toward bleak and unprepared futures? I think those of us in technology, or who were around for the most recent financial crisis, know that by the time the reckoning arrives the damage will already be done, and it is currently incalculable.

https://www.cnn.com/2026/02/25/tech/anthropic-safety-policy-change

Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon

Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.

CNN