What's Happening?
OpenAI has announced a new ChatGPT feature, introduced on January 20, 2026, that aims to make the service safer for teenage users by predicting their age. It is part of OpenAI's ongoing effort to provide a safer online environment for users under 18. The age prediction model combines behavioral and account-level signals, such as account tenure, activity times, usage patterns, and self-reported age, to estimate whether a user is a minor. If the model predicts that an account belongs to someone under 18, ChatGPT automatically applies additional protections to reduce exposure to potentially harmful content. These protections limit access to content involving brutal violence, dangerous viral challenges, sexual or violent role-play, depictions of self-harm, and content promoting extreme beauty standards. OpenAI says these measures are informed by expert opinion and academic research on child development.
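To make the signal-combining idea concrete, here is a minimal, purely illustrative sketch in Python. OpenAI has not published its model, so every feature name, weight, and threshold below is a hypothetical stand-in; a real system would use a trained classifier, not hand-set scores.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    # Hypothetical features loosely matching the signals the article lists.
    reported_age: Optional[int]        # self-reported age, if provided
    account_tenure_days: int           # how long the account has existed
    daytime_weekday_ratio: float       # share of usage during school-day hours (0.0-1.0)

def predict_is_minor(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Toy linear score over signals; weights are invented for illustration."""
    score = 0.0
    if s.reported_age is not None and s.reported_age < 18:
        score += 0.6  # self-report is a strong but spoofable signal
    if s.account_tenure_days < 90:
        score += 0.1  # newer accounts carry slightly more uncertainty
    score += 0.3 * s.daytime_weekday_ratio  # usage pattern contribution
    return score >= threshold

def apply_protections(is_minor: bool) -> set:
    # Content categories the article says are restricted for predicted minors.
    restricted = {
        "brutal_violence",
        "dangerous_viral_challenges",
        "sexual_or_violent_roleplay",
        "self_harm_depictions",
        "extreme_beauty_standards",
    }
    return restricted if is_minor else set()
```

The key design point the article implies is asymmetry: when signals are ambiguous, the system defaults toward the protective (minor) classification, with an identity-verification path for adults who are misclassified.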
Why It's Important?
The introduction of age prediction in ChatGPT is significant because it addresses growing concerns about the safety of minors using AI technologies. By implementing this feature, OpenAI aims to shield young users from inappropriate content, which can harm their mental health and development. The move reflects a broader industry trend toward prioritizing user safety, especially for vulnerable groups like teenagers. It also highlights the tension between privacy and safety: OpenAI has said that when the two conflict for minors, it prioritizes safety. This development could prompt other tech companies to adopt similar measures, potentially leading to industry-wide changes in how AI platforms handle user safety.
What's Next?
OpenAI plans to keep improving the age prediction model as it learns which signals best predict age. Users mistakenly classified as minors can verify their age through a secure identity verification service to regain full access to ChatGPT. OpenAI also says it will continue refining its safeguards to resist circumvention attempts. In addition, the company offers parental controls that let parents further tailor how their teens use the service. As the feature rolls out, OpenAI will likely monitor its impact and adjust based on user feedback and evolving safety standards.
