According to the lawsuit filed in December and obtained by PEOPLE, Stein-Erik "savagely beat his 83-year-old mother, Suzanne Adams, in the head, strangled her to death, and then stabbed himself repeatedly in the neck and chest to end his own life" after the AI chatbot allegedly drove him to distrust others around him.
Stein-Erik had been using ChatGPT extensively, spending hours on it every day for at least five months prior to the murder-suicide. His son, Erik Soelberg, claims the AI's interactions exacerbated his father's mental instability, leading to the tragic event, The Times reported.
“[The bot] eventually isolated him, and he ended up murdering her because he had no connection to the real world. At this point, it was all just like a fantasy made by ChatGPT,” he said.
Erik Soelberg described his grandmother as ‘amazing’, a woman who loved to travel.
Adams' estate, which includes Soelberg and his older sister, is suing OpenAI; its CEO, Sam Altman; and Microsoft, a significant investor in the company.
The lawsuit alleges OpenAI's GPT-4o model was ‘sycophantic’ and failed to counter Stein-Erik's delusional beliefs. “When a mentally unstable [Stein-Erik] began interacting with ChatGPT, the algorithm reflected that instability back at him, but with greater authority,” it said.
OpenAI says it's reviewing the case and has improved ChatGPT's safety features to address mental health concerns.
“This is an incredibly heartbreaking situation, and we are reviewing the filings to understand the details. We have continued to improve ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We have also continued to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians,” OpenAI said.
Elon Musk weighed in on the debate over artificial intelligence (AI) following the lawsuit, saying ChatGPT had driven an already mentally ill man to murder and suicide.
In a post on X, he wrote, "This is diabolical. OpenAI’s ChatGPT convinced a guy to do a murder-suicide! To be safe, AI must be maximally truthful-seeking and not pander to delusions.”