What is the story about?
The United States-based artificial intelligence (AI) firm Anthropic has posted a vacancy on LinkedIn for a Policy Manager covering chemical weapons and high-yield explosives. The company says the role will shape how its AI systems handle sensitive chemical and explosives information.
According to the recruitment post, the person hired will design and implement evaluation methodologies for 'assessing AI model capabilities' related to chemical weapons, explosives synthesis, and energetic materials.
The firm says applicants should have at least five years of experience in chemical weapons or explosives defense, with 'deep expertise in energetic materials, chemical weapon agents, or related areas.'
'Responsible AI'
Anthropic is not the only firm to hire for such a role. OpenAI earlier posted a similar vacancy: the San Francisco-based company planned to recruit a researcher focused on frontier biological and chemical risks for its Preparedness team.
Acknowledging that frontier AI models have the potential to benefit humanity but also pose severe risks, the company said, "To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models." OpenAI constituted this team to identify, track, and prepare for catastrophic risks related to frontier AI models.
However, experts warn that this approach carries an inherent risk: it could expose AI tools to information about weapons, even if they have been instructed not to use it.
AI: The New Frontier of War
The AI industry has come under heavy scrutiny over the potential existential threats posed by the technology. Recently, the US government has called on AI firms while launching its war in Iran and military operations in Venezuela.
Anthropic previously decided to take legal action against the US government after it designated the company a supply chain risk. The firm insisted that its systems must not be used by the government for either fully autonomous weapons or mass surveillance.
Anthropic co-founder Dario Amodei said in February that the technology is not yet good enough and should not be used for such (war) purposes.
However, a BBC report says that Anthropic's AI assistant, Claude, has not been phased out: it remains embedded in systems provided by Palantir and is also being deployed by the US in the US-Israel war with Iran.