The Trump administration is considering a new plan that would require the Pentagon to carry out safety testing of artificial intelligence models before they are deployed across federal, state and local government systems, according to a report by Axios on Tuesday.
The proposal marks a shift in approach, as the administration appears to be softening its earlier hardline stance on AI safety and security measures. It would place the Pentagon at the centre of evaluating potential risks in AI systems before they are rolled out for public sector use.
According to Axios, the White House’s Office of the National Cyber Director held two meetings last week—one with technology and cybersecurity firms, and another with industry trade groups—to discuss growing concerns over advanced AI systems. These included tools such as Anthropic’s Mythos Preview, which has raised questions about cybersecurity risks.
Sources cited in the report said the office has also been discussing a broader AI security framework that would formally assign the Pentagon responsibility for testing AI models used by government agencies.
The discussions come as the administration reassesses its earlier position on AI governance. The shift follows heightened attention on frontier AI systems and their potential security implications, particularly after the emergence of powerful new models.
A White House official, however, said no final policy decisions have been made. “Any policy announcement will come directly from the president,” the official said, adding that discussions about possible executive orders remain speculative.
The issue has gained urgency since Anthropic introduced Mythos, an advanced AI model described by experts as having strong capabilities in identifying and potentially exploiting cybersecurity weaknesses. The White House has reportedly been reviewing the model’s implications for national security.
Earlier this year, the Trump administration had directed government agencies to stop working with Anthropic, with the Pentagon classifying the company as a supply-chain risk. That decision followed disagreements over safeguards for military use of AI tools.
Since then, Anthropic has disputed the designation and filed a lawsuit against the Department of Defense. The company has also said its most advanced model, Mythos Preview, will not be publicly released and will instead be tested privately with select partners under its “Project Glasswing” initiative.
(With inputs from agencies)