Pentagon Blocks Anthropic Over Unreliable AI Controls

The Department of Defense has blocked AI firm Anthropic from military use, citing concerns that its models can't be reliably controlled - a critical issue when it comes to high-stakes decision-making. This move comes after internal memos raised red flags about Anthropic's AI controls and what one Pentagon memo described as a…

https://osintsights.com/pentagon-blocks-anthropic-over-unreliable-ai-controls?utm_source=mastodon&utm_medium=social

#MilitaryAi #ArtificialIntelligence #DepartmentOfDefense #EmergingThreats #AiControls


Pentagon blocks Anthropic over unreliable AI controls; learn why the DoD took this drastic step against the AI firm.

OSINTSights

Anthropic Supply-Chain Risk Label Should Stay In Place, Appeals Court Says, by Paresh Dave

Anthropic Inc. lost a bid to have the Pentagon’s “supply‑chain‑risk” label temporarily removed when a three‑judge panel of the U.S. Court of Appeals in Washington, D.C., ruled on Wednesday that the AI firm had not satisfied the strict requirements for relief. The appellate decision directly contradicts a San Francisco district‑court ruling issued a month earlier, which found that the Department of Defense had acted in bad faith and ordered the label erased, prompting the Trump administration to restore Anthropic’s access to its Claude models across the federal government. The appellate panel stressed that, even if the designation harms Anthropic financially, overturning it could force the military to continue working with a vendor it deems unsafe during an ongoing conflict.

The two courts are each addressing a different legal mechanism the Pentagon used to impose the same practical restriction, making Anthropic the first U.S. company flagged under both supply‑chain‑risk statutes that are normally reserved for foreign entities deemed national‑security threats. Anthropic argues that the label unfairly penalizes it for refusing to let its Claude system be used in high‑risk operations—such as fully autonomous drone strikes—without human oversight, and it claims the government’s action has cost it lucrative contracts. The company’s spokesperson, Danielle Cohen, said the Washington decision shows the urgency of the issue and expressed confidence that the courts will eventually deem the designations unlawful, while the DoD has not commented.

Legal scholars note that the case tests the limits of executive power over technology firms and could set a precedent for how the government regulates AI in defense. Experts familiar with government contracting say Anthropic has a strong argument, yet courts often defer to the White House on national‑security matters, a deference that could chill broader debate about AI performance and safety. Final rulings on the two lawsuits may take months, with oral arguments in the D.C. court scheduled for May 19, leaving the future of Anthropic’s role in Pentagon AI projects—and the broader landscape of AI governance—still uncertain.


#business/artificialintelligence #anthropicinc #pentagon #uscourtofappeals #departmentofdefense

Anthropic loses appeals court bid to temporarily block Pentagon blacklisting

A federal appeals court in Washington, D.C., rejected Anthropic’s request for an emergency stay that would have halted the Department of Defense’s decision to label the AI company a “supply‑chain risk” and effectively blacklist its technology. The court said the balance of equities favored the government, noting that the alleged harm to Anthropic was largely financial, whereas the DOD’s action involved protecting vital AI capabilities during an active military conflict. Consequently, the court denied the motion for a stay while the case proceeds on its merits.

The Pentagon’s designation, issued in early March, obliges defense contractors to certify that they do not use Anthropic’s Claude models in any work for the military. Anthropic argued that the label was an unconstitutional, arbitrary retaliation that threatened its free‑speech rights and could cause irreparable damage. While the judges acknowledged the company could suffer some immediate harm, they concluded that its interests were primarily financial and that “substantial expedition is warranted” for the government’s national‑security concerns.

The legal battle follows a separate San Francisco federal court ruling that granted Anthropic a preliminary injunction blocking the Trump administration’s ban on using the Claude AI system. The company’s efforts to secure a broader injunction against the DOD’s blacklist have thus far been unsuccessful, and the dispute now moves forward in the appellate system. The case highlights the growing tension between emerging AI firms and U.S. defense agencies over control, access, and the ethical use of powerful generative‑AI technologies.


#anthropic #pentagon #departmentofdefense #appealscourt #federalcourt

US courts deliver split decisions on AI firm Anthropic's bid to halt Pentagon blacklist designation, with appeals court rejecting injunction while district court previously granted relief, allowing continued work with non-Defense agencies during litigation
#YonhapInfomax #Anthropic #DepartmentOfDefense #SupplyChainRisk #PreliminaryInjunction #AITechnology #Economics #FinancialMarkets #Banking #Securities #Bonds #StockMarket
https://en.infomaxai.com/news/articleView.html?idxno=114589
US Courts Issue Split Rulings on 'Blacklisted' Anthropic's Injunction Request


Yonhap Infomax

Conflicting Rulings Leave Anthropic in ‘Supply-Chain Risk’ Limbo

https://fed.brid.gy/r/https://www.wired.com/story/anthropic-appeals-court-ruling/

Defense Agencies Pursue Multi-Cloud Strategies to Bolster Operational Tempo

As defense agencies shift their focus beyond mere cloud migration metrics, they're now prioritizing a new set of goals that will give them a decisive edge in the digital landscape. They're leveraging multi-cloud strategies to drive decision advantage at scale, financial transparency, and operational…

https://osintsights.com/defense-agencies-pursue-multi-cloud-strategies-to-bolster-operational-tempo

#DepartmentOfDefense #HomelandSecurity #MulticloudStrategies #CloudAdoption #OperationalTempo


Defense agencies adopt multi-cloud strategies to boost operational tempo; discover how they leverage the cloud for decision advantage at scale.

OSINTSights
Senator calls for investigation into reports that Hegseth removed more than a dozen women and people of color from military promotion list.
#hegseth
#departmentofdefense
#whitesupremacy
https://www.nbcnews.com/politics/national-security/gilibrand-calls-information-reports-hegseth-withholding-promotions-rcna266955
Gillibrand calls for information after reports of Hegseth withholding the promotions

A prominent Democratic senator on the panel that oversees the Pentagon is requesting additional information from the Defense Department following reports that Defense Secretary Pete Hegseth is withholding the promotions of several Black and female senior officers.

NBC News
$1.5 trillion for the Department of War (#departmentofdefense). Do you know who was also feverishly rearming, violating international treaties, and enforcing the law of the stronger? Yes, Adolf Hitler. #WhereAreWeHeading?

An email from fired Army Chief of Staff Gen. Randy George says that soldiers deserve "courageous leaders of character".
#hegseth
#departmentofdefense

https://www.cbsnews.com/news/army-chief-of-staff-gen-randy-george-going-email-pentagon/

Ousted Army Chief of Staff Gen. Randy George says U.S. soldiers deserve "courageous leaders of character" in outgoing email

Defense Secretary Pete Hegseth asked George to step down and take immediate retirement, CBS News exclusively reported earlier this week.