San Francisco-based Anthropic is challenging a decision by the Defense Department and other federal agencies to shift their AI work to other providers, based on a designation of the company as a supply-chain risk.
Anthropic wants a judge to remove the supply-chain risk designation and require US agencies to withdraw directives related to it. The company claims it is being shut out for disagreeing with the administration and argues the legal principles at stake affect every federal contractor whose views the government dislikes.
“These actions are unprecedented and unlawful,” Anthropic said in a complaint filed Monday in San Francisco federal court, adding that the company’s business is being threatened. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”
According to the complaint, the government’s actions “are harming Anthropic irreparably,” putting the company’s contracts with private firms “in doubt” and potentially “jeopardizing hundreds of millions of dollars in the near-term.”
There are likely to be “enormous” consequences for others, including on those “whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance,” Anthropic said.
The dispute erupted last month, after the Pentagon sought to use Claude for any purpose within legal limits, free of any usage restrictions from Anthropic. The company had insisted that the chatbot not be used for mass surveillance of Americans or in fully autonomous weapons operations.
The company’s lawsuit names as defendants the Department of War — which the Trump administration uses to describe the Department of Defense — as well as more than a dozen other federal agencies.
The White House defended the administration’s actions.
“President Trump will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates,” spokesperson Liz Huston said. Trump and Hegseth “are ensuring America’s courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders.”
The Department of Defense didn’t respond to a request for comment on the lawsuit.
Anthropic said in the complaint that it imposed “usage restrictions” based on the company’s “unique understanding of Claude’s risks and limitations — including Claude’s capacity to make mistakes and its unprecedented ability to accelerate and automate analysis of massive amounts of data, including data about American citizens.”
As part of its challenge to the US government, Anthropic also filed a complaint in an appellate court in Washington, DC, focusing on a law governing procedures for mitigating supply-chain risks in procurement. In that suit, the company claimed the Defense Department exceeded its authority with actions that were “arbitrary, capricious and an abuse of discretion.”
Founded in 2021 by former OpenAI employees, Anthropic quickly cemented itself as a rival to the ChatGPT maker with Claude, which it billed as more safety- and business-focused. Today, the San Francisco-based company has more than 300,000 business customers who use its models to streamline workplace responsibilities, particularly in the field of computer programming where it has emerged as a market leader with its AI coding assistant, Claude Code.
Anthropic started the year on a winning streak, with surging sales, multiple viral products and a large funding round — all giving the startup a big advantage in the costly global AI race.
But its future is uncertain since its relationship with the Pentagon imploded in late February — just before the US attacked Iran in a major Middle East military operation.
Some legal and policy experts warned that the fallout from the government’s declaration would be dire.
“The designation and attempts to blacklist the company from other aspects of the government go far beyond the scope of what would be considered the least restrictive means, even if there are security concerns about further use of the product,” one of those experts, Huddleston, said in a statement.
The case is Anthropic v. US Department of War, 26-cv-01996, US District Court, Northern District of California (San Francisco).
(Updates with White House comment, additional filing by Anthropic in DC federal appeals court.)
© 2026 Bloomberg L.P. All rights reserved. Used with permission.
