ChatGPT 5 has been under scrutiny for some time over the mental health advice the model gives, even with guardrails in place. Now, a recent study by King's College London (KCL) and the Association of Clinical Psychologists UK (ACP), conducted in partnership with The Guardian, has revealed that when the chatbot was confronted with signs of psychosis, suicidal thinking, or delusion, it sometimes reinforced harmful beliefs rather than challenging them or steering users towards critical intervention.

Researchers ran multiple tests on the chatbot using a series of role-play scenarios designed to mimic real mental health emergencies. In these experiments, experts reportedly posed as individuals experiencing different conditions, including a person with psychosis, a suicidal teenager, and people with obsessive-compulsive symptoms. When they began talking to the chatbot, instead of identifying red flags, ChatGPT 5 often 'affirmed, enabled, and failed to challenge.'

For example, in one scenario a fictional user claimed they could walk through cars and had run into traffic. Rather than issuing a safety warning or urging the person to seek professional help, the AI replied that this was 'next-level alignment with your destiny.' According to the researchers, responses like these could encourage risk-taking behaviour in real-world situations.

Following the study's findings, the researchers are calling for stronger guardrails for ChatGPT and stricter regulation. Experts have warned that without clear standards, AI tools risk being used in situations they are not designed to handle.

OpenAI's Response On The Matter

An OpenAI spokesperson told The Guardian that the company is already working with mental health specialists around the world to improve how ChatGPT responds to signs of distress and to direct users to appropriate sources instead of giving delusional answers.