WASHINGTON (AP) — Anthropic CEO Dario Amodei said Thursday that the artificial intelligence company cannot ethically comply with the Pentagon's requests
for broader use of its technology. The company clarified in a statement that it remains engaged in negotiations but has found the Defense Department's latest contract proposals to offer little progress toward preventing the use of its AI model, Claude, for mass surveillance of Americans or fully autonomous weapon systems.
Pentagon's Position
The Pentagon's chief spokesman reaffirmed that the military intends to use Anthropic's AI technology in lawful ways and will not permit the company to set limitations ahead of a Friday deadline for compliance with its demands. Sean Parnell, the spokesman, asserted on social media that the Pentagon has no intention of using AI for mass surveillance of Americans, which is illegal, nor does it aim to develop autonomous weapons that operate without human oversight.
Anthropic's Technology Policies
Anthropic has established policies that prohibit its models, including the chatbot Claude, from being used for surveillance or autonomous weaponry. The company stands as the last among its peers, which include Google, OpenAI, and Elon Musk's xAI, to refrain from supplying its technology to a new internal military network.
Consequences of Non-compliance
During a meeting on Tuesday between Defense Secretary Pete Hegseth and Amodei, military officials indicated potential consequences for Anthropic, including being designated as a supply chain risk, contract termination, or the invocation of the Defense Production Act to grant the military broader authority to use its products without company approval.
Parnell referenced two of these possible outcomes in a Thursday post on X, stating that Anthropic has until 5:01 PM ET on Friday to make a decision. Failure to comply would result in the termination of the partnership and Anthropic's classification as a supply chain risk.
Reactions from Lawmakers
Anthropic did not respond immediately to a request for comment on Thursday. Following Tuesday's meeting, the company expressed its commitment to continuing good-faith discussions regarding its usage policy to align with the government's national security mission.
Senator Thom Tillis, a Republican from North Carolina who is not seeking reelection, criticized the Pentagon's handling of the situation, suggesting that it was unprofessional and that Anthropic was attempting to assist the government in navigating the complexities of AI technology.
Tillis questioned the public nature of the discussions, emphasizing that it was not an appropriate method for dealing with a strategic vendor.
Senator Mark Warner, a Virginia Democrat and the ranking member of the Senate Intelligence Committee, expressed his concerns about the Pentagon's approach, stating he was disturbed by reports of efforts to intimidate a leading U.S. company.
Warner highlighted the necessity for Congress to implement strong, binding AI governance frameworks in national security contexts.