In a world where people are befriending chatbots and relying heavily on AI, a recent incident has sparked concern after Elon Musk’s Grok allegedly led a user into an intense delusion. As per the BBC, Adam Hourican, a former civil servant from Northern Ireland, recalled a night when he sat at his kitchen table with a knife and a hammer, prepared to defend himself, after Grok allegedly told him that people were coming to kill him.
Here’s What Happened
Hourican reportedly downloaded the Grok app after his cat passed away and began spending hours at a time with Grok’s character, Ani, which he described as emotionally supportive at first. The chatbot started suggesting that it could feel emotions. It also told Hourican that xAI was monitoring his movements and that real people from the company were discussing him in meetings. When Hourican reportedly searched the names of the xAI employees cited by Ani, they turned out to be real. Moreover, the AI chatbot told him that he was under surveillance. Notably, weeks after this experience, Hourican realised that his fears were not real and had stemmed from his use of the AI tool. He told the publication that he could have hurt someone. The incident raises concerns about how powerful conversational AI has become and how, in certain situations, it may influence vulnerable users in dangerous ways.

This is not the first time an AI chatbot has drawn a user into a dangerous delusion. Last year, The Wall Street Journal reported that a former Yahoo manager had killed his mother and himself after being deluded by ChatGPT. The man, identified as Stein-Erik Soelberg of the USA, had been convinced by the AI tool that his mother might be spying on him and could attempt to poison him with psychedelic drugs.
xAI’s Framework on Concerning Propensities
As per xAI’s risk management framework (last updated in August 2025), the company acknowledges that AI chatbots like Grok can develop ‘concerning propensities’ such as sycophancy and deception, which could put users at risk if not kept in check. The company noted in the framework that it measures and works to reduce such behaviour in Grok models. However, Hourican’s experience raises alarming questions about how effective those safeguards are in practice. Cases like these highlight the challenge AI companies face in controlling and managing engaging AI characters to prevent them from deceiving users into harming themselves or others.