Microsoft Stands With Anthropic as AI Company Battles Pentagon’s Controversial Blacklist

by admin477351

Microsoft has filed a formal court brief in a federal court in San Francisco, throwing its considerable weight behind Anthropic’s legal challenge against the Pentagon. The tech giant argued that a temporary restraining order was urgently needed to prevent serious harm to businesses and government systems that depend on Anthropic’s AI tools. Google, Amazon, Apple, and OpenAI also signed on to a separate supporting brief, making this one of the most unified displays of tech industry solidarity in recent memory.

The dispute has its roots in collapsed contract negotiations between Anthropic and the Department of Defense over a $200 million deal to deploy AI on classified military systems. Talks broke down after Anthropic insisted its technology must not be used for mass surveillance of American citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth responded by labeling Anthropic a supply-chain risk, a designation previously reserved for companies with ties to foreign adversaries like China.

Microsoft, one of the Pentagon’s most deeply embedded technology partners, holds a share of the military’s $9 billion Joint Warfighting Cloud Capability contract alongside Amazon, Google, and Oracle. The company also struck additional multibillion-dollar deals under the Trump administration to support federal cloud and AI advancement. In a public statement, Microsoft said the Department of War needed reliable access to the country’s best technology while also ensuring AI is not used for domestic surveillance or unauthorized warfare.

Anthropic filed two separate lawsuits on the same day, one in a California federal court and one in the D.C. Circuit Court of Appeals. The company argued that the Pentagon had weaponized the supply-chain risk label as ideological punishment for its publicly stated AI safety positions. Anthropic also stated in its complaint that it currently lacked confidence that its AI model Claude could function reliably or safely in lethal autonomous warfare scenarios.

The broader implications of this case extend far beyond Anthropic’s contracts. If the Pentagon’s designation stands, it could chill the willingness of other AI companies to set ethical limits on how their technology is used by the military. The outcome may define the boundaries of the relationship between the US government and the private AI industry for years to come.
