After facing massive backlash over ChatGPT's handling of self-harm conversations, Sam Altman's OpenAI has finally announced a new safety feature for its AI chatbot -- Trusted Contact. This optional feature is designed to help users during serious mental health crises. It allows adult users to add someone they trust -- such as a family member, close friend, or caregiver -- who will receive an alert if ChatGPT detects signs of possible self-harm or suicide risk.

"Today we are starting to roll out Trusted Contact, an optional safety feature in ChatGPT that allows adults to nominate someone they trust, who may be notified if our automated systems and trained reviewers detect the enrolled person may have discussed harming themselves in a way that indicates a serious safety concern," OpenAI said in a blog post.

"Trusted Contact is designed to offer another layer of support alongside the localised helplines already available in ChatGPT, by helping users connect to a person they trust when they are in crisis," it added.
According to the company, people often use ChatGPT to discuss personal struggles, emotional stress, and difficult situations. With Trusted Contact, OpenAI wants to encourage users to connect with real people during moments of crisis instead of staying isolated.

The feature works through a multi-step review process. Users above 18 years of age can add one trusted adult contact through ChatGPT settings. The selected person must accept the invitation before the feature becomes active.

If ChatGPT's automated systems detect conversations that may indicate a serious safety concern, the platform will first encourage the user to contact their trusted person directly. OpenAI says users may also see suggested conversation starters to make it easier to reach out.

After this, a specially trained human review team checks the situation. If reviewers believe the risk is serious, ChatGPT can send a brief notification to the Trusted Contact through email, text message, or in-app notification.

Importantly, OpenAI says the alerts will not include chat transcripts or private conversation details. The notification will only mention that self-harm discussions may have occurred in a concerning way and encourage the contact to check in on the person.

The company says Trusted Contact is not meant to replace professional mental health support or emergency services. ChatGPT will still recommend crisis helplines and emergency assistance whenever needed.

OpenAI claims the feature was developed with guidance from mental health experts, clinicians, suicide prevention organisations, and the American Psychological Association. The company also noted that users can remove or change their Trusted Contact anytime from settings.