What is the story about?
The National Security Agency (NSA) is reportedly using a powerful artificial intelligence model developed by Anthropic, despite internal resistance from the Department of Defense (DoD), according to a report by Axios.
The development underscores a growing divide within the US government over the role of advanced AI tools in national security operations.
At the centre of the debate is Mythos Preview, Anthropic’s most advanced model to date, designed with strong cybersecurity capabilities. While defence officials have previously raised concerns about the company, intelligence agencies appear to be moving ahead with adoption, prioritising operational needs.
Pentagon concerns vs intelligence priorities
The DoD had earlier moved to restrict the use of Anthropic’s tools, reportedly directing vendors in February to limit engagement with the company over what it described as a potential “supply chain risk”. The dispute is ongoing, with legal and policy questions yet to be resolved.
Despite this, sources cited by Axios indicate that the NSA has continued to deploy Mythos, with at least some usage extending across other parts of the department. The apparent contradiction highlights a broader tension, with the military simultaneously questioning Anthropic’s reliability in court while elements within its ecosystem rely on its technology.
The disagreement traces back to contract renegotiations earlier this year. Defence officials had pushed for broader access to Anthropic’s models for “all lawful purposes”. However, the company reportedly resisted certain applications, particularly around mass domestic surveillance and autonomous weapons, drawing a firm line on how its AI could be used.
Some within the Pentagon view this stance as a liability, arguing that it raises doubts about whether Anthropic can fully support defence requirements when needed. The company, however, has rejected such claims.
Mythos gains traction despite restrictions
Anthropic has tightly controlled access to Mythos, limiting availability to roughly 40 organisations due to concerns over its potential misuse, particularly in offensive cyber operations. Only a handful of these partners have been publicly identified, though sources suggest the NSA is among those granted access.
While specific use cases within the NSA remain unclear, other organisations are believed to be using the model primarily to identify vulnerabilities in their own systems, scanning for exploitable weaknesses and strengthening cyber defences.
The model’s reach may not be limited to the US. Counterparts in the United Kingdom have indicated that they can access similar capabilities through national AI security initiatives, reflecting a wider global interest in such tools.
Recent high-level discussions also signal continued engagement.
Anthropic CEO Dario Amodei reportedly met senior US officials, including White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent, to discuss Mythos deployment and broader AI security considerations. According to sources, the talks were productive and may shape how agencies beyond the Pentagon engage with the technology.
For now, the situation reflects a familiar pattern in emerging technologies. Even as concerns around trust, control and policy persist, the urgency of real-world applications, particularly in cybersecurity, is pushing adoption forward.