Mrinank Sharma, an AI safety engineer at Anthropic, has resigned from the company, saying the 'world is in peril' -- not only from artificial intelligence (AI) or bioweapons but from a 'series of interconnected crises' unfolding at the same time across the globe. In a message on the Elon Musk-owned social media platform X, the Indian-origin researcher thanked his team and described his time at Anthropic as meaningful and inspiring. He joined the company around two years ago, after completing his PhD and moving to San Francisco with the aim of contributing to AI safety research. During his stint, he worked on several high-impact areas, including studying AI sycophancy -- a tendency of AI systems to agree too easily with users -- and building safeguards to reduce risks linked to AI-assisted biological threats.
He said some of those safeguards were deployed into real-world systems. Despite expressing pride in his work, Sharma said he felt it was time to move on. In his letter, he wrote that he has been 'reckoning with our situation' and believes the world is facing interconnected challenges not only from artificial intelligence but also from other technological and societal risks.

"Dear Colleagues, I've decided to leave Anthropic. My last day will be February 9th. Thank you. There is so much here that inspires and has inspired me. To name some of those things: a sincere desire and drive to show up in such a challenging situation, and aspire to contribute in an impactful and high-integrity way; a willingness to make difficult decisions and stand for what is good; an unreasonable amount of intellectual brilliance and determination; and, of course, the considerable kindness that pervades our culture.

I've achieved what I wanted to here. I arrived in San Francisco two years ago, having wrapped up my PhD and wanting to contribute to AI safety. I feel lucky to have been able to contribute to what I have here: understanding AI sycophancy and its causes; developing defences to reduce risks from AI-assisted bioterrorism; actually putting those defences into production; and writing one of the first AI safety cases. I'm especially proud of my recent efforts to help us live our values via internal transparency mechanisms; and also my final project on understanding how AI assistants could make us less human or distort our humanity. Thank you for your trust.

Nevertheless, it is clear to me that the time has come to move on. I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences. Moreover, throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too," he wrote.

For now, Sharma does not have a fixed next role. Instead, he plans to take time off to explore writing and academic interests beyond engineering.