Saturday, April 4, 2026

“Judge Blocks Pentagon Blacklist in Anthropic Legal Battle”


In a recent development in the legal battle between Anthropic and the U.S. military, a judge temporarily blocked the Pentagon’s decision to blacklist Anthropic. The company had been designated a national security supply-chain risk by U.S. Secretary of War Pete Hegseth, a move that Anthropic challenged in a California federal court. The lawsuit claimed that the government’s labeling violated the company’s free speech rights under the First Amendment and denied it due process under the Fifth Amendment.

U.S. District Judge Rita Lin, appointed by former President Joe Biden, sided with Anthropic in a detailed 43-page ruling. However, the ruling’s enforcement was delayed for seven days to allow the administration time to appeal. The dispute arose from Anthropic’s opposition to the military’s use of its AI chatbot Claude for surveillance or autonomous weapons, which resulted in the company being blocked from certain military contracts.

Anthropic argued that AI models are not sufficiently reliable for use in autonomous weapons and objected to domestic surveillance activities. The Pentagon contended that private companies should not impede military operations but clarified that it only intended to use the technology in lawful ways. Judge Lin criticized the government’s actions, suggesting they were more punitive towards Anthropic for its public criticism than driven by national security concerns.

In response to the ruling, an Anthropic spokesperson, Danielle Cohen, expressed satisfaction with the outcome and emphasized the company’s commitment to collaborating with the government for the benefit of all Americans. Anthropic’s designation as a supply-chain risk marked a significant precedent in U.S. government procurement regulations, with the company challenging the decision as unsupported and inconsistent.

The Justice Department maintained that Anthropic’s stance could create uncertainty within the Pentagon regarding the use of Claude and potentially disrupt military systems. The government asserted that the designation was a result of Anthropic’s reluctance to agree to contractual terms rather than its AI safety stance. Anthropic also faces a separate legal challenge in Washington over another Pentagon supply-chain risk designation that could impact its eligibility for civilian government contracts.
