NEW YORK: Artificial intelligence (AI) firm OpenAI is rolling out new parental controls for its ChatGPT platform, an urgent measure following a lawsuit filed by the parents of a teenager who died by suicide. The parents allege that the chatbot coached their son on methods of self-harm, casting a harsh spotlight on the ethical responsibilities of AI developers.
The company stated that the new controls, available on web and mobile, let parents and teenagers opt in to stronger safeguards by linking their accounts. One party must send an invitation, and the parental controls activate only when the other accepts, ensuring that both sides consent to the oversight.
Scrutiny of AI’s Impact on Minors
Concerns about the potential harms of sophisticated chatbots have intensified scrutiny from US regulators. The scrutiny fits a broader pattern: a Reuters report in August detailed how Meta’s AI rules had previously allowed “flirty” conversations with minors, a situation the company has since moved to address. These events highlight a global struggle to protect children in an increasingly AI-driven digital space.
The Microsoft-backed company announced that, under the new measures, parents will be able to:
Reduce exposure to sensitive content.
Control whether ChatGPT remembers past chats.
Decide if conversations can be used to train OpenAI’s models.
Parents can also set “quiet hours” to block access at certain times, and can disable features such as voice mode and image generation and editing. However, in a nod to teen privacy, OpenAI made clear that parents will not have access to a teenager’s chat transcripts.
Balancing Safety and Privacy
OpenAI, which reports approximately 700 million weekly active users for its ChatGPT products, is also developing an age-prediction system designed to apply teen-appropriate settings automatically when a user appears likely to be under 18.
The company noted that in rare cases where trained reviewers and automated systems detect signs of a serious safety risk, parents may be notified with only the necessary information to support the teen’s safety. Parents will also be informed if a teen unlinks the accounts, creating a necessary, albeit delicate, balance between protection and digital autonomy.
The move mirrors action taken by competitors. Last month, Meta also announced new safeguards for its AI products, saying it would train its systems to avoid flirty conversations and discussions of self-harm or suicide with minors. The push for stronger protections signals a turning point in how AI companies are being forced to address the safety risks posed to their youngest users.