By Deepa Seetharaman, David Jeans and Jeffrey Dastin
WASHINGTON/SAN FRANCISCO, Jan 29 (Reuters) - The Pentagon and artificial-intelligence developer Anthropic are at odds over potentially eliminating safeguards on the company's technology; removing them could allow the government to use it to target weapons autonomously and to conduct domestic surveillance inside the United States, three people familiar with the matter told Reuters.
The discussions represent an early test case for whether Silicon Valley – in Washington’s good graces after years of tensions – can sway how U.S. military and intelligence personnel deploy increasingly powerful AI on the battlefield.
After weeks of contract talks, the U.S. Department of Defense and Anthropic are at a standstill, six people familiar with the matter said on condition of anonymity. The company's position on how its AI tools can be used has intensified disagreements between it and the Trump administration, details of which have not been previously reported.
In line with a January 9 Defense Department memo on its AI strategy, Pentagon officials have argued that they should be able to deploy commercial AI technology regardless of companies' usage policies, so long as they comply with U.S. law, the people said.
A spokesperson for the department, which the Trump administration renamed the Department of War, did not immediately respond to requests for comment.
In a statement, Anthropic said its AI is "extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work."
(Reporting By Deepa Seetharaman and Jeffrey Dastin in San Francisco and David Jeans in Washington, Editing by Kenneth Li and Franklin Paul)