Anthropic submitted a comprehensive 96-page filing to the U.S. Court of Appeals for the District of Columbia Circuit on Wednesday, April 22, 2026. The San Francisco-based artificial intelligence firm is seeking to refute formal assertions by the U.S. Department of Defense that its proprietary Claude AI models pose a significant supply chain risk to national security. Anthropic argues that once its software is integrated into air-gapped, classified Pentagon networks, the company retains no technical capacity to manipulate the system, modify its outputs, or access the data processed within those secure environments.
The legal conflict originated from a dispute over the terms of a defense contract involving the use of AI in fully autonomous lethal weapons and potential surveillance frameworks. The Trump administration recently applied a supply chain risk designation to Anthropic, a move the company characterizes as unlawful retaliation. That designation is typically used under federal law to protect national security systems from sabotage by foreign adversaries. Anthropic contends the Pentagon is instead wielding it to stigmatize the company after it refused to waive its internal safety protocols governing the use of its technology in lethal autonomous weapon systems (LAWS).
In its latest filing, Anthropic’s legal counsel addressed specific technical inquiries previously raised by the court. The company detailed its deployment architecture, asserting that the model weights and inference engines are physically and digitally isolated from its corporate infrastructure once delivered to the military. According to the brief, this isolation makes the remote interference and backdoor access the Pentagon fears technically impossible. The filing emphasizes that the Department of Defense maintains total local control over the instances of Claude running on its classified servers.
The case underscores a deepening friction between the U.S. defense establishment and Silicon Valley’s leading AI developers. The administration’s aggressive stance is part of a broader geopolitical strategy to secure the domestic technology supply chain against any perceived vulnerabilities. However, the dispute also highlights the ethical debate surrounding the militarization of AI. While the U.S. government views the rapid adoption of large language models as a strategic necessity to maintain a competitive edge over China, companies like Anthropic have sought to maintain boundaries on how their models are utilized in combat scenarios.
This latest legal maneuver follows a ruling by the appeals court earlier this month that rejected Anthropic’s request for an emergency order blocking the Pentagon’s actions. The court is now collecting evidence ahead of oral arguments, which are scheduled for May 19, 2026. The Department of Justice, representing the executive branch, is expected to submit its formal response to Anthropic’s claims in the coming weeks. The D.C. Circuit’s final decision could establish a significant legal precedent for the government’s authority to regulate domestic technology firms under national security mandates.