In a significant development within the artificial intelligence sector, OpenAI has finalized an agreement with the U.S. Department of Defense (DoD) to deploy its AI models on classified networks. The accord was announced shortly after President Donald Trump directed federal agencies to cease using technology from rival AI firm Anthropic, and after Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk to national security. The sequence of events underscores escalating tension between AI developers and the U.S. government over ethical safeguards in national security applications.
The dispute originated from negotiations surrounding Anthropic’s AI model, Claude, under a contract potentially valued at up to $200 million. Anthropic insisted on restrictions prohibiting the use of its technology for mass domestic surveillance or fully autonomous weapons systems without human oversight. The Pentagon, however, demanded unfettered access for all lawful purposes, viewing these limitations as an infringement on military operational autonomy. Following months of discussions, including a meeting between Anthropic CEO Dario Amodei and Secretary Hegseth, the talks collapsed when Anthropic refused to yield.
On February 27, 2026, President Trump issued a directive via Truth Social ordering every federal agency to "immediately cease" all use of Anthropic's technology, with a six-month phase-out period to facilitate transitions. The government, he wrote, "don't need it, we don't want it, and will not do business with them again," framing Anthropic's attempt to impose its own terms over constitutional authority as a "disastrous mistake." Concurrently, Secretary Hegseth announced on X that Anthropic would be labeled a supply-chain risk, a designation typically reserved for entities posing threats akin to foreign adversaries. The designation prohibits military contractors and partners from engaging in commercial activities with Anthropic, effectively barring the company from Pentagon-related work. Hegseth described Anthropic's position as "arrogance and betrayal," asserting that U.S. law, not corporate policies, governs defense decisions.
Anthropic responded by expressing deep sadness over the decision and vowing to challenge the supply-chain risk designation in court, deeming it “legally unsound” and a “dangerous precedent.” The company reiterated its unwavering commitment to prohibitions on mass domestic surveillance and fully autonomous weapons, stating that “no amount of intimidation or punishment” would alter its principles. Anthropic also offered assistance in transitioning services during the phase-out, anticipating minimal impact on non-military customers.
Hours after these announcements, OpenAI CEO Sam Altman revealed on X that his company had reached an agreement with the DoD—rebranded as the "Department of War" under the Trump administration—to integrate its models into classified systems. Altman highlighted the DoD's "deep respect for safety" and its alignment with OpenAI's core principles, including bans on domestic mass surveillance and requirements for human responsibility in the use of force, including in autonomous weapons systems. The deal incorporates technical safeguards to ensure model compliance and involves deploying OpenAI personnel for oversight on classified cloud networks. Altman urged the DoD to extend similar terms to all AI firms, expressing a preference for de-escalation through "reasonable agreements" rather than legal confrontation.
This sequence of events marks a pivotal moment in AI-government relations, with implications for industry standards on ethical AI deployment. Other AI providers, including Google and xAI, hold similar DoD contracts and have agreed to broader lawful uses of their models. Critics, including former officials, have described the actions against Anthropic as potentially the most stringent government intervention against a domestic AI company to date, raising concerns about the precedent for private-sector autonomy in national security contexts. The developments also feed broader debates over AI's role in warfare, amplified by recent conflicts that have demonstrated the capabilities of automated systems.
