What is the story about?
In a hiring move that reflects the changing priorities of artificial intelligence research, Google DeepMind has recruited a philosopher to work alongside its engineers and scientists.
The appointment underscores how questions once confined to academia, such as consciousness and ethics, are becoming central to the development of advanced AI systems.
The new recruit, Henry Shevlin, will focus on areas including machine consciousness, the evolving relationship between humans and AI, and preparedness for artificial general intelligence. He is expected to join the lab in May while continuing his academic work part-time.
The decision highlights a broader shift within leading AI companies, where technical progress is increasingly intertwined with philosophical inquiry. As AI systems grow more capable, the need to understand not just how they work, but what they mean, is becoming harder to ignore.
From Cambridge academia to AI frontline
Shevlin currently serves as Associate Director (Education) at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. His research spans cognitive science, AI ethics, and the study of consciousness, making him a natural fit for DeepMind’s expanding focus on foundational questions.
Announcing the move on social media, Shevlin described the role as a rare opportunity to work on ideas he has explored throughout his career, now backed by the resources of one of the world’s leading AI labs. He also confirmed that he will continue teaching and conducting research at Cambridge on a part-time basis.
His academic journey began at the University of Oxford, where he studied Classics and Philosophy, before moving to the CUNY Graduate Center in the United States to complete his PhD. During his time in New York, he also taught at Baruch College.
Beyond formal academia, Shevlin’s work reflects a wide range of intellectual interests, from animal cognition and neuroscience to game theory and science fiction, illustrating the interdisciplinary nature of the questions he will now tackle at DeepMind.
Why AI companies are hiring philosophers
DeepMind’s decision is not an isolated one. AI companies are increasingly recognising that building advanced systems requires more than technical expertise.
Questions about alignment, ethics, and the nature of intelligence itself demand input from disciplines traditionally seen as separate from engineering.
Rival AI firm Anthropic made a similar move when it brought in Amanda Askell as an in-house philosopher to work on AI alignment and fine-tuning.
Such hires point to a growing awareness that as AI systems approach more general capabilities, the risks and implications become more complex. Issues like whether machines could possess forms of consciousness, how humans should interact with them, and how to prepare for AGI are no longer abstract debates, but practical challenges facing the industry.
By bringing philosophers into the fold, companies like DeepMind are attempting to bridge the gap between rapid technological progress and the deeper questions it raises. In doing so, they are signalling that the future of AI will be shaped not just by code, but by ideas about what intelligence, responsibility, and consciousness truly mean.