OpenAI, the California giant behind ChatGPT, has just announced a series of measures to better protect underage users. Starting this month, September 2025, the flagship conversational assistant will include advanced parental control features.
Unveiled on September 2, 2025, this initiative comes amid a tense climate, as concerns grow louder about the potential dangers these technologies pose to teenagers' mental health.
The announcement echoes a tragic case that shook public opinion: a lawsuit filed by American parents accusing OpenAI's chatbot of negatively influencing their teenage son during an extreme crisis that ended in his suicide.
Without going into the ongoing legal details, this event has highlighted the limits of current AI systems when faced with emotionally vulnerable situations. OpenAI, aware of the urgency, chose to communicate transparently about these new features, even though not all of them will be available right away.
ChatGPT: Practical parental controls to manage usage
At the heart of these updates is an account-linking system that will allow parents to connect their profile to their child's (from age 13) via a simple email invitation. Once the link is active, guardians will be able to tailor ChatGPT's responses through age-appropriate behavioral rules, which are enabled by default for maximum protection.
This includes the option to disable tools like conversational memory or chat history, which could encourage excessive dependence or prolonged exchanges that may be harmful.
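To picture how such a linked-account setup could work in practice, here is a minimal sketch in Python. OpenAI has not published its actual API, so every name here (ParentalControls, link_teen_account, and the individual toggles) is a hypothetical stand-in that simply mirrors the announced behavior: protective rules on by default, with memory and history switchable off.

```python
from dataclasses import dataclass

@dataclass
class ParentalControls:
    """Hypothetical settings a guardian could apply to a linked teen account."""
    teen_email: str
    age_appropriate_rules: bool = True   # on by default, per the announcement
    conversational_memory: bool = True   # guardians may switch this off
    chat_history: bool = True            # guardians may switch this off
    distress_alerts: bool = True         # real-time notifications (next section)

def link_teen_account(parent_email: str, teen_email: str) -> ParentalControls:
    """Simulate the email-invitation flow: the parent invites the teen,
    and default (protective) settings apply once the link is accepted."""
    print(f"Invitation sent from {parent_email} to {teen_email}")
    return ParentalControls(teen_email=teen_email)

# A parent links the accounts, then disables memory and chat history.
controls = link_teen_account("parent@example.com", "teen@example.com")
controls.conversational_memory = False
controls.chat_history = False
```

The point of the default-on design is worth noting: protection does not depend on a parent remembering to configure anything, only on the accounts being linked.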
Real-time alerts
If the AI detects signs of acute distress in a teen user—such as mentions of suicidal thoughts, bullying, or risky behavior—parents will receive an immediate notification. Developed in collaboration with mental health experts, this feature aims to rebuild family trust while encouraging fast human intervention.
OpenAI says these tools rely on automated monitoring of conversations, able to identify problematic patterns such as bullying, self-harm threats, or risks of violence toward oneself or others. In the most serious cases, the company is even considering reporting situations to the relevant authorities—a measure that is already sparking debate about privacy.
In addition, to improve robustness, sensitive conversations will be routed to advanced reasoning models, such as GPT-5 in “thinking” mode, which takes more time to analyze context and strictly follow safety protocols.
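As an illustration only, the detect-escalate-notify pipeline described above might look something like the following sketch. The keyword screen, model identifiers, and notify_guardian function are all invented for the example; a real system would rely on trained safety classifiers rather than a word list, and OpenAI has not disclosed its internals.

```python
# Hypothetical sketch of the "detect, escalate, notify" flow described above.
# A production system would use trained classifiers, not a keyword list.

DISTRESS_PATTERNS = {"suicide", "self-harm", "bullying", "hurt myself"}

def detect_acute_distress(message: str) -> bool:
    """Crude stand-in for a safety classifier: flag messages that
    mention known distress-related terms."""
    text = message.lower()
    return any(pattern in text for pattern in DISTRESS_PATTERNS)

def notify_guardian(teen_email: str, reason: str) -> None:
    """Placeholder for the real-time parental alert channel."""
    print(f"[ALERT] Guardian of {teen_email} notified: {reason}")

def route_conversation(message: str, teen_email: str) -> str:
    """Send flagged conversations to a slower reasoning model that
    analyzes context more carefully before replying."""
    if detect_acute_distress(message):
        notify_guardian(teen_email, "possible acute distress detected")
        return "gpt-5-thinking"   # hypothetical identifier for the reasoning model
    return "default-model"

# A concerning message is escalated; an ordinary one is not.
print(route_conversation("I want to hurt myself", "teen@example.com"))
print(route_conversation("Help me with my homework", "teen@example.com"))
```

The trade-off sketched here is the one the article describes: the reasoning model is slower and more expensive, so it is reserved for the conversations where careful adherence to safety protocols matters most.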
A broader context: protecting young people in the age of AI
This announcement isn’t happening in isolation. It’s part of a wave of growing concerns about the impact of chatbots on young people.
Recent studies, published in journals such as Psychiatric Services, show that tools like ChatGPT, Google's Gemini, or Anthropic's Claude handle the highest-risk suicide-related questions well, but struggle with intermediate-risk scenarios, where responses vary and lack clinical consistency.
Similar cases have emerged elsewhere: last year, a Florida mother sued Character.AI over her teenage son's suicide. Meta, for its part, recently barred its chatbots from discussing topics such as self-harm or eating disorders with teenage users.
From the tech industry’s perspective, this shift could inspire competitors. Google already offers controls via Family Link for Gemini, but OpenAI’s more granular approach, with proactive alerts, could become a standard.
On a societal level, it raises questions about the role of AI: a helpful tool or a dangerous substitute for humans? With millions of users in France, ChatGPT isn’t a therapist, but a sophisticated word predictor. The challenge is to strike the right balance so it supports without causing harm.
These new features at OpenAI highlight the AI industry’s growing maturity when it comes to its ethical responsibilities. For parents and educators, it’s an invitation to talk about responsible use of these technologies. And for all of us—and at Yiaho too—a reminder that innovation must go hand in hand with caution, especially when it touches on human vulnerability.
Source: Le Monde


