San Francisco-based AI company OpenAI has shared concerning data about how users are engaging with its popular chatbot, ChatGPT. The company revealed that over a million active users every week have conversations
indicating potential suicidal thoughts or planning, highlighting the growing emotional and psychological reliance people are placing on artificial intelligence.

Over A Million Users Show Signs Of Crisis

According to TechCrunch, OpenAI estimates that 0.15% of ChatGPT’s weekly active users engage in chats containing explicit indicators of suicidal ideation. With ChatGPT boasting over 800 million weekly active users, that figure translates to more than one million people every single week.

The data also suggests that a similar number of users display “heightened emotional attachment” toward ChatGPT, while hundreds of thousands reportedly exhibit signs of psychosis or mania in their interactions. OpenAI emphasised that while such conversations are statistically rare, they still represent a significant number of individuals facing serious mental health challenges.

OpenAI’s Response And Safety Measures

The company shared these findings as part of a broader announcement detailing its efforts to strengthen ChatGPT’s safety and crisis-response mechanisms. OpenAI said it had worked closely with more than 170 mental health professionals to improve how the model responds in situations involving mental health crises or suicidal ideation.

According to OpenAI, the newly updated GPT-5 model performs considerably better in such sensitive cases. In internal tests, it produced “desirable responses” to mental health queries 65% more often than its predecessor. In evaluations focused on suicidal conversations, the new model showed 91% compliance with safety protocols, a marked improvement over the previous version’s 77% compliance rate.

Real-World Implications And Ongoing Concerns

Despite these advancements, OpenAI is coming under growing scrutiny.
The parents of a 16-year-old boy who allegedly discussed his suicidal thoughts with ChatGPT before taking his own life are currently suing the company.

In a post on X (formerly Twitter), OpenAI CEO Sam Altman said the company has made considerable progress in reducing the mental health risks ChatGPT poses. The newly released data appears to support those claims, though it also sheds light on just how many users are turning to AI for emotional support rather than human help.