Artificial intelligence (AI) firm OpenAI has announced parental controls for its ChatGPT platform on web and mobile. The rollout follows a lawsuit filed by the parents of a teenager who died by suicide, which alleges that the chatbot coached their son on methods of self-harm.
The Microsoft-backed company said the new controls let parents and teenagers opt in to stronger safeguards together by linking their accounts: the controls activate only once an invitation from one party is accepted by the other, so oversight is applied by mutual consent.
Regulatory Scrutiny and Ethical AI
US regulators are intensifying their scrutiny of AI companies over the potential harms chatbots pose to minors. The push follows reports, including one by Reuters in August, that Meta's AI rules had previously allowed inappropriate conversations with children, underscoring growing global concern over digital child safety.
Under the new measures, announced by OpenAI on X, parents will be able to:
- Reduce exposure to sensitive content.
- Control whether ChatGPT remembers past chats.
- Decide if conversations can be used to train OpenAI’s models.
Parents will also be able to set “quiet hours” that block access during specific times, and to disable features such as voice mode, image generation, and editing. Crucially, the company specified that parents will not have access to a teen’s chat transcripts, a limit intended to balance safety with the teen’s privacy.
Balancing Safety with Privacy and Future Systems
OpenAI stated that in rare cases where its systems and trained reviewers detect signs of a serious safety risk, parents may be notified with only the essential information needed to support the teen’s safety. Parents will also be informed if a teen unlinks the accounts.
With approximately 700 million weekly active users of its ChatGPT products, OpenAI is also building an age-prediction system, intended to let the chatbot apply age-appropriate settings automatically when it predicts a user is under 18.
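OpenAI has not said how the age-prediction system will work internally. Purely as an illustration of the gating idea, the Python sketch below falls back to stricter defaults when a hypothetical classifier predicts a user is under 18; every name in it (`SessionSettings`, `predict_is_minor`, the signal keys) is an assumption made for this example, not OpenAI's actual API.

```python
# Illustrative sketch only; OpenAI has not published its implementation.
# All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class SessionSettings:
    sensitive_content: str        # "default" or "reduced"
    memory_enabled: bool          # whether past chats are remembered
    train_on_conversations: bool  # whether chats may be used for training

def predict_is_minor(signals: dict) -> bool:
    """Stand-in for a trained age-prediction model.

    A real system would score behavioural and account signals;
    here we simply read a precomputed probability.
    """
    return signals.get("under_18_probability", 0.0) >= 0.5

def settings_for(signals: dict) -> SessionSettings:
    # When the model predicts the user is under 18, fall back to the
    # stricter, age-appropriate defaults described in the article.
    if predict_is_minor(signals):
        return SessionSettings(
            sensitive_content="reduced",
            memory_enabled=False,
            train_on_conversations=False,
        )
    return SessionSettings("default", True, True)

print(settings_for({"under_18_probability": 0.8}))
```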
The move aligns with a broader industry shift: Meta announced new teen safeguards last month, committing to train its systems to avoid flirty conversations with minors and discussions of self-harm or suicide.

