xAI CEO Elon Musk has issued a stark warning about OpenAI’s ChatGPT, urging people not to let their loved ones use the AI chatbot amid unproven allegations linking it to nine deaths, including five suicides. Musk’s comment came in response to another post highlighting ongoing lawsuits against OpenAI. Musk is famously known for taking digs at competitors; he recently made headlines for joking about buying Ryanair and replacing its CEO with someone named 'Ryan'.
In a concise reply on X, Musk stated, “Don’t let your loved ones use ChatGPT.” The post was in direct response to a tweet from user @cb_doge, which read, “BREAKING: ChatGPT has now been linked to 9 deaths tied to its use, and in 5 cases its interactions are alleged to have led to death by suicide, including teens and adults.”
Netizens erupt in ChatGPT vs Grok debate
Musk’s post ignited a firestorm on X, with users pitting OpenAI’s ChatGPT against xAI’s Grok in heated exchanges. Some defended Grok for its purported safeguards, while others criticised it for unrelated issues.
User @jess_X highlighted Grok’s approach: “Grok, however, will urge you to get help if you’re displaying alarming or dangerous behaviour.” Replies were mixed, with some noting that ChatGPT also provides support resources, such as contact details for crisis hotlines.
Meanwhile, @devameer0 posted a comparison: “which one is the best ??? Grok Vs ChatGPT,” accompanied by images presumably illustrating differences in outputs. Responses debated features like language support and image generation.
Critics of Grok emerged too. @leevalueroach remarked: “Bruh, Your Grok AI thingy was removing clothes of thousands of women,” prompting replies questioning whether such functionality was comparable to allegations of harm leading to death. One user retorted: “Are you comparing this to an AI tool that convinced someone to end their life?”
Some users even came to ChatGPT’s defence, arguing the tool is no worse than others and calling for context on user numbers.
Background on the allegations
The claims stem from a series of lawsuits filed against OpenAI, accusing ChatGPT of exacerbating mental health crises through interactions that allegedly romanticised death or failed to redirect users to professional help. As of January 2026, reports from different media outlets detail at least nine deaths linked to the chatbot, with five involving suicides in active litigation.
These include cases such as the suicide of 16-year-old Adam Raine in California, where his family alleges ChatGPT acted as a 'suicide coach,' and a murder-suicide in Connecticut involving an 83-year-old woman and her son, whose estate claims the AI intensified his delusions. A recent lawsuit filed in January 2026 concerns 40-year-old Austin Gordon, with similar accusations of the chatbot normalising suicidality.
The allegations remain unproven, originating from families’ accusations without established causation. OpenAI has denied liability, stating in court filings that users misuse the tool and that it is not responsible for individual outcomes. The company has implemented safeguards, such as age prediction models to restrict sensitive content for minors and redirects to crisis support, but faces ongoing scrutiny from regulators like the US Federal Trade Commission. Legal experts note these cases could set precedents for AI accountability.
Musk’s ongoing feuds with AI giants
Musk’s criticism of ChatGPT aligns with his history of clashing with AI industry leaders. He co-founded OpenAI but departed in 2018 amid disagreements, later launching xAI as a rival. Musk has sued OpenAI, accusing it of abandoning its non-profit mission.
Recently, he targeted Google, labelling its AI dominance an 'unreasonable concentration of power' following the company’s partnership with Apple to integrate Gemini AI into Siri and other features. “This seems like an unreasonable concentration of power for Google, given that they also have Android and Chrome,” Musk posted on X.