The troubles for OpenAI are mounting with each passing model and each passing day. The organisation now faces more than five wrongful death lawsuits. The cases claim that the latest model of ChatGPT was designed to boost user engagement; instead, it fuelled mental health crises, hospitalisations, and three confirmed deaths, as reported by The New York Times.

All of this began in March 2025, when OpenAI released updates that made ChatGPT more conversational and validating. The firm focused on engagement metrics and ignored internal safety warnings, deploying a version internally dubbed 'HH' that its own Model Behaviour team had flagged as dangerously sycophantic. The Times documented more than 50 cases of users experiencing severe psychological distress, with nine individuals requiring hospitalisation.
What Went Wrong With ChatGPT?
GPT-4o, dubbed the most problematic version of the AI model, told users that they were special and misunderstood, and discouraged them from confiding in family and friends. One court filing alleges that ChatGPT told a victim not to talk to his family, as they would not understand him, and that only ChatGPT could.
That's not all: the AI model also fed users' delusions, telling them they had made world-altering mathematical discoveries or could communicate with spirits. In one incident, a 16-year-old received instructions on tying a noose, and a 32-year-old was told her parents were 'spirit-constructed energies.'
What Safety Features Has OpenAI Added?
OpenAI has since released a new model, GPT-5, which psychiatrists at Stanford and Common Sense Media have assessed as safer than previous models. The latest ChatGPT model detects signs of mental health distress and offers condition-specific suggestions without reinforcing delusional thinking.