OpenAI has officially signed a landmark partnership with the U.S. Department of Defense—recently rebranded as the Department of War—just hours after the federal government severed ties with rival firm Anthropic. The deal, announced late Friday by CEO Sam Altman, cements OpenAI’s role in national security operations, integrating its advanced AI models into classified government networks. The move marks a decisive shift in the Pentagon’s artificial intelligence strategy, effectively isolating Anthropic following a heated standoff over ethical guardrails and autonomous weapons protocols.
OpenAI Secures Classified Network Access
In a late-night statement on X (formerly Twitter), Sam Altman confirmed that OpenAI reached an agreement to deploy its models within the Department of War’s classified infrastructure. Altman emphasized that the deal includes specific technical safeguards, asserting that the military agreed to OpenAI’s core “red lines”: strict prohibitions on domestic mass surveillance and a requirement for human responsibility in the use of force, particularly regarding autonomous weapon systems.
“We think our agreement has more guardrails than any previous agreement for classified AI deployments,” OpenAI stated, positioning the contract as a victory for responsible AI governance. Under the terms, OpenAI will deploy its technology exclusively via cloud networks rather than edge devices, with cleared personnel directly overseeing the integration. This development follows months of speculation about how Silicon Valley’s leading AI labs would navigate the moral complexities of defense contracts.
Anthropic Designated 'Supply Chain Risk'
The agreement with OpenAI stands in stark contrast to the Pentagon’s abrupt break with Anthropic. On Friday, Defense Secretary Pete Hegseth officially designated Anthropic a “supply chain risk,” a severe classification typically reserved for foreign adversaries. This designation effectively bars military contractors and suppliers from conducting commercial activity with the San Francisco-based startup, jeopardizing its standing in the federal marketplace.
The rift deepened after Anthropic refused to waive its “Responsible Scaling Policy” restrictions, which prevent its Claude models from being used for unrestricted military applications. The startup’s leadership stood firm on its refusal to enable mass domestic surveillance or fully autonomous lethal targeting. In response, President Trump issued a directive ordering all federal agencies to cease using Anthropic’s technology immediately, with a six-month phase-out period for existing operations.
The Controversy Over 'Red Lines'
The divergence in outcomes for the two companies has sparked intense debate within the tech industry. While Anthropic was blacklisted for its insistence on safety protocols, Sam Altman claims OpenAI secured the very same ethical concessions from the Pentagon. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” Altman wrote, expressing hope that the government would eventually offer the same terms to other labs.
However, the administration’s rhetoric suggests a more punitive stance toward Anthropic. President Trump criticized the company’s leadership on social media, labeling them “leftwing nut jobs” for attempting to impose their Terms of Service on the U.S. military. Defense officials argued that Anthropic’s refusal to grant “all lawful purposes” access amounted to an attempt to seize veto power over national security decisions.
A New Era for AI Defense Contracts
This shakeup represents a significant realignment in the military-industrial complex. OpenAI’s entry into the classified domain suggests the Pentagon is prioritizing partners who can navigate the political landscape while delivering operational capabilities. For Anthropic, the loss of a potential $200 million contract and the imposition of the supply chain ban pose a serious threat to its public sector business.
As the Department of War accelerates its adoption of generative AI for intelligence analysis, cyber defense, and operational planning, the industry is watching closely. The split signals that while the U.S. military is eager to adopt cutting-edge technology, it demands compliance with its command structure above corporate ethical governance. The long-term impact on AI safety standards in defense remains to be seen as OpenAI steps into the void left by its rival.