When the Trump administration designated Anthropic a “supply-chain risk” and ordered every federal agency to stop using Claude, it didn’t just cancel a $200 million contract. It may have set in motion a chain of events that weakens America’s most advanced AI company — at the exact moment the U.S. needs it most.
Anthropic has now filed two lawsuits against the Department of Defense. What happens next could matter far more than either side is letting on.
What Actually Happened
Supposedly, Anthropic refused to give the Pentagon unrestricted access to Claude, its frontier AI model and the only one currently running on classified military networks. Anthropic wanted guarantees of zero mass surveillance and no autonomous weapons without a human in the loop making the final life-or-death decisions. The Department of War’s message was “remove those restrictions or lose everything.” And President Trump ordered every federal agency to stop using Anthropic and designated the company a “supply-chain risk.”
But there is far more to this story than lawsuits and bruised egos.
The Real Threat Isn’t the Contract
Federal law already prohibits mass surveillance of U.S. citizens. Department of War policy already restricts autonomous weapons. Anthropic is demanding contractual veto power over activities that are already illegal. A private company claiming authority over how the United States military operates is not acceptable: no one elected Dario Amodei, and we don’t let Lockheed dictate targeting doctrine. The notion that a software company should hold veto power over operational military decisions has no precedent.
Claude outperforms ChatGPT on virtually every enterprise benchmark that matters, from legal reasoning and financial modeling to cybersecurity and legacy-systems modernization. But a “supply-chain risk” designation by the Department of War threatens to end Anthropic’s commercial momentum before the company can fully capitalize on its technological lead.
The Geopolitical Stakes
Anthropic signed its $200 million contract with the Pentagon in July 2025, just eight months ago. Now the deal is dead, and OpenAI is swooping in to fill the void. To say this happened fast is an understatement.
Additionally, Anthropic and OpenAI have both publicly accused Chinese labs of distilling their models. Those stolen, open-source versions, including DeepSeek, are now available to the PLA, to Iran, and to every bad actor on the planet, with zero guardrails. Do we want to exist in a world where American companies restrict their own military while adversaries train on pirated versions of that same technology with no restrictions whatsoever?
The real existential threat is not the loss of the $200 million contract but the ripple effect that will run through AWS, Google, Palantir, Accenture, Deloitte, and the entire defense-contractor ecosystem, reaching deep into Anthropic’s commercial customer base in the U.S.
The corporate world has shown that it will do whatever it takes to keep the current administration happy. Every company that does business with the federal government may now have to certify zero exposure to Anthropic products. AWS, Google Cloud, and Azure all serve the government, and Anthropic says the largest U.S. companies use Claude, many of them defense contractors. If that comes to pass, Anthropic may not remain viable in the United States for much longer.
Can Anthropic Win in Court?
My view is that, legally, the designation won’t survive. There are the limitations of 10 U.S.C. § 3252, the due process and First Amendment arguments, and the Luokung and Xiaomi precedents. Then there is the inherent contradiction: the government calls Anthropic dangerous, yet it is allowing six months to phase the company out.
Taken together, that is a playbook for Anthropic to win both suits. The company has billions, which means it can afford the best legal team money can buy. It has the ammunition and the will to battle this administration for as long as it takes.
What Anthropic Must Do Now
Winning in court is necessary but not sufficient. To stay viable, Anthropic needs to move on multiple fronts simultaneously:
- Accelerate domestic commercial dominance with companies not tied to government contracts
- Build an allied-government strategy — identify which international partners can benefit from Claude and build that customer base immediately
- Litigate aggressively and endlessly — delay is the enemy
- Deepen ecosystem dependencies by leading the governance coalition for values-driven, responsible AI — the more public goodwill and industry trust Anthropic builds, the stronger its long-term position
The core question isn’t really about lawsuits or contract dollars. It’s about who decides the boundaries of national defense: elected officials accountable to voters, or tech executives accountable to their boards. Vinod Khosla put it plainly: he admires that Anthropic is standing on principle, but he disagrees with the principle itself.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.