Anthropic Sues Pentagon After Being Branded a National Security ‘Supply Chain Risk’
Artificial intelligence developer challenges U.S. defense decision that blocks military contractors from using its technology.
Artificial intelligence developer Anthropic has filed a lawsuit against the United States government after the Pentagon labeled the company a national security “supply chain risk,” a designation that effectively blocks defense agencies and contractors from using its AI systems.
The legal action follows a growing dispute between the Department of Defense and the San Francisco-based company over how its technology should be used in military operations.
Federal officials formally designated Anthropic and its products a supply-chain risk earlier this month, ordering the military and its contractors to halt the use of the company’s artificial intelligence tools in defense work.
The designation carries significant consequences.
Companies working with the Pentagon could face restrictions if they continue to rely on Anthropic’s technology, potentially cutting the firm off from major federal contracts and partnerships within the defense industry.
Anthropic argues in its court filing that the decision is unlawful and represents retaliation for the company’s policies limiting how its AI systems may be deployed.
The company’s leadership has drawn clear boundaries around its models, stating that they should not be used for mass domestic surveillance of Americans or for fully autonomous weapons that could operate without human oversight.
According to the lawsuit, the government’s action violates constitutional protections and exceeds the authority granted under federal supply-chain security laws.
The company is asking a federal judge to block the designation and prevent agencies from enforcing it while the legal challenge proceeds.
The clash follows months of tense negotiations between the Pentagon and Anthropic over the military’s access to the company’s flagship AI model, Claude.
Defense officials have argued that private technology firms cannot impose restrictions that limit how the U.S. military uses critical systems during national security operations.
Historically, supply-chain-risk designations have been used primarily against foreign companies suspected of posing espionage or sabotage threats.
Applying the label to a U.S. technology developer is widely viewed as an unusual step that underscores the intensity of the dispute over artificial intelligence and military authority.
Anthropic has previously worked with U.S. government agencies and defense contractors, including partnerships to provide AI tools capable of analyzing large datasets and supporting national security decision-making.
The new restrictions place those collaborations in jeopardy and could affect hundreds of millions of dollars in existing or future government work.
The legal battle highlights a broader debate unfolding across Washington and Silicon Valley about the role of artificial intelligence in warfare and surveillance.
As governments race to deploy increasingly powerful AI systems, tensions are emerging over how much control technology companies should retain over the uses of their own creations.