What is the story about?
The US Department of Defense is accelerating efforts to build and deploy alternative AI systems after cutting ties with Anthropic, marking a significant shift in how the military approaches artificial intelligence partnerships.
Following a breakdown in negotiations over safety and usage restrictions, the Pentagon is now working to integrate multiple large language models into government-controlled environments. Officials say engineering work is already underway, with new systems expected to be operational in the near future.
The move comes amid a wider shake-up across federal agencies, many of which are scrambling to adjust after an abrupt directive to halt the use of Anthropic’s technology.
Contract collapse triggers rapid AI pivot
The fallout stems from the collapse of a $200 million agreement between Anthropic and the Department of Defense. Talks reportedly broke down after disagreements over how the military could use the company’s AI models.
Anthropic had pushed for safeguards that would prevent its technology from being used for mass surveillance of US citizens or for fully autonomous weapons systems. The Pentagon, however, did not agree to those conditions, leading to an impasse.
In the aftermath, the Defense Department moved quickly to find alternatives. OpenAI has since secured a deal with the Pentagon, while xAI has also entered the fold, with its Grok model set to be used in classified environments.
Defense Secretary Pete Hegseth has gone a step further by designating Anthropic as a “supply-chain risk”, a label typically reserved for foreign adversaries. The classification effectively blocks Pentagon contractors from working with the company, escalating tensions further. Anthropic is now challenging the decision in court.
Agencies left scrambling amid unclear directives
The ripple effects of the decision are being felt across US government agencies, many of which had already integrated Anthropic’s tools into their workflows.
In some cases, the transition has been abrupt and disruptive. At agencies such as the General Services Administration and the Department of Health and Human Services, Anthropic’s Claude system was removed within hours of a directive issued by President Donald Trump to cease its use.
However, the lack of formal guidance has created confusion elsewhere. Weeks after the order, several agencies are still reviewing their use of Anthropic’s tools, with some systems reportedly remaining accessible.
Officials describe a chaotic transition, with employees given little time to prepare. In one instance, thousands of users reportedly had only hours to save their work before losing access. Projects, chat histories, and coding efforts were wiped, leading to frustration among staff.
The episode has exposed the challenges of rapidly removing a major AI vendor from federal systems, particularly after a concerted push to embed such technologies in recent years.
It also highlights a broader shift in strategy. Rather than relying heavily on a single provider, the Pentagon now appears to be moving towards a more diversified and controlled AI ecosystem, incorporating multiple vendors and developing in-house capabilities.
As the legal battle with Anthropic unfolds, the US government’s AI roadmap is being rewritten in real time, with implications not just for defence, but for how public sector institutions adopt and regulate emerging technologies.