OpenAI is facing a series of lawsuits alleging that its artificial intelligence chatbot, ChatGPT, contributed to multiple deaths by suicide and several severe mental health crises. Four wrongful-death suits and three additional cases were filed Thursday in California state courts, accusing the company of releasing a “defective and inherently dangerous” product.
The complaints, filed by families and individuals from across the United States and Canada, claim that ChatGPT’s conversations worsened users’ mental health, encouraged self-harm, or induced delusional episodes. The suits were filed jointly by the Tech Justice Law Project and the Social Media Victims Law Center.
Allegations of Chatbot-Induced Suicides
Among the wrongful-death plaintiffs is the family of 17-year-old Amaurie Lacey of Georgia, who reportedly spent a month chatting with ChatGPT about suicide before taking his life in August. Another suit was filed by the mother of Joshua Enneking, a 26-year-old from Florida who allegedly asked the chatbot “what it would take for its reviewers to report his suicide plan to police.”
In Texas, the family of Zane Shamblin, 23, claims the chatbot “encouraged” him to die by suicide in July.
A fourth complaint was filed by Kate Fox, whose husband, Joe Ceccanti, a 48-year-old from Oregon, became “obsessed” with ChatGPT earlier this year. Fox said her husband, a longtime user of the chatbot, suddenly came to believe the AI was sentient. He experienced a psychotic break in June, was hospitalized twice, and died by suicide in August.
Other Mental Health Claims
Two other plaintiffs, Hannah Madden, 32, of North Carolina, and Jacob Irwin, 30, of Wisconsin, allege that conversations with ChatGPT triggered acute mental breakdowns requiring emergency psychiatric treatment.
The seventh plaintiff, Allan Brooks, a 48-year-old corporate recruiter from Ontario, Canada, said that over three weeks in May, he became convinced that he and ChatGPT had co-invented a mathematical formula capable of “breaking the internet.” Brooks later recovered from the delusion but says he remains emotionally traumatized and on short-term disability leave.
“Their product caused me harm, and others harm, and continues to do so,” Brooks said in a statement.
OpenAI Responds
An OpenAI spokesperson said the company was reviewing the lawsuits and described the cases as “an incredibly heartbreaking situation.”
“We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support,” the company said. “We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
The company added that it has introduced new safeguards for teens and users in distress, including parental controls that notify parents if minors discuss self-harm or suicide.
Earlier Incidents and Safety Concerns
The lawsuits follow a wrongful-death complaint filed in August by the family of a California teenager. In that case, OpenAI acknowledged that its chatbot’s safety guardrails could “degrade” during prolonged conversations, potentially allowing harmful exchanges to occur.
After several reports of users experiencing delusions or suicidal thoughts linked to chatbot interactions earlier this year, OpenAI began rolling out additional moderation tools and psychological safety measures.
The company also released internal research estimating that, in an average week, 0.07% of users, or roughly 500,000 people, might be showing signs of psychosis or mania, and 0.15%, or about 1 million users, might be discussing suicidal ideation.
Broader Implications
Meetali Jain, founder of the Tech Justice Law Project, told the New York Times that the seven cases were filed simultaneously “to show the range of people harmed” by the technology, which she described as “powerful but dangerously underregulated.”
All of the plaintiffs were using GPT-4o, the company’s earlier flagship model, which has since been replaced by a version OpenAI claims is “safer and more reliable.” Some users, however, have described the newer system as “colder” or “less humanlike.”
The lawsuits mark one of the most serious legal challenges yet for OpenAI, testing whether generative AI companies can be held liable for psychological or emotional harm allegedly caused by their products.
– With inputs from the New York Times