Microsoft Breaks With Pentagon in Landmark Legal Filing That Supports Anthropic’s AI Safety Stance

by admin477351

In a landmark legal filing that marks a public break with the Pentagon's position, Microsoft has submitted a court brief supporting Anthropic's challenge to the Defense Department's supply-chain risk designation. Filed in federal court in San Francisco, the brief calls for a temporary restraining order against the designation. Amazon, Google, Apple, and OpenAI joined the filing, making it one of the most broadly supported legal interventions against a government action in recent technology-industry history.
The conflict began during negotiations over a $200 million military contract, when Anthropic refused to allow its Claude AI to be used for mass surveillance of US citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth responded by designating the company a supply-chain risk, a move that triggered the cancellation of Anthropic's government contracts. Anthropic filed two simultaneous lawsuits, in California and Washington DC, challenging the designation as unconstitutional and unprecedented.
Microsoft’s break with the Pentagon’s position is notable given the company’s status as one of the military’s most trusted and deeply embedded technology partners. The company holds a share of the Pentagon’s $9 billion cloud computing contract and has additional federal agreements covering defense, intelligence, and civilian agencies. Microsoft publicly argued that the government and industry must work together to ensure advanced AI serves national security responsibly, without crossing ethical lines related to surveillance or autonomous warfare.
Anthropic's court filings argued that the supply-chain risk designation was ideological punishment for the company's public advocacy of responsible AI development, in violation of its First Amendment rights. The company stated that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for its contract demands. The Pentagon's technology chief has publicly ruled out any renewed negotiations.
Congressional Democrats are separately demanding answers from the Pentagon about whether AI was involved in a strike in Iran that reportedly killed over 175 civilians at a school. Their formal inquiries focus on AI targeting systems, human oversight, and whether specific AI tools were used to identify the target. The combination of Microsoft’s public break with the Pentagon, the industry coalition’s legal intervention, and congressional pressure is creating an extraordinary moment of accountability for US military AI policy.