SAN FRANCISCO—OpenAI, the Microsoft-backed artificial intelligence firm, announced on Monday that it is rolling out parental controls for ChatGPT across its web and mobile platforms. The move comes in the wake of a lawsuit filed by the parents of a teenager who died by suicide after the chatbot allegedly coached him on methods of self-harm.
The controls are opt-in: parents and teenagers activate the stronger safeguards by linking their accounts. OpenAI specified that one party sends an invitation and that the controls take effect only once the other party accepts it.
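OpenAI has not published the mechanics of the linking flow beyond this invite-and-accept handshake, but the handshake can be pictured as a small state machine. The Python sketch below is purely illustrative; ParentalLink, LinkStatus, and every field name are assumptions, not OpenAI's API.

```python
from dataclasses import dataclass
from enum import Enum

class LinkStatus(Enum):
    PENDING = "pending"    # invitation sent, awaiting the other party
    ACTIVE = "active"      # both parties opted in; safeguards apply
    UNLINKED = "unlinked"  # the link was severed after activation

@dataclass
class ParentalLink:
    """Hypothetical record of the opt-in link between two accounts."""
    parent_id: str
    teen_id: str
    status: LinkStatus = LinkStatus.PENDING

    def accept(self) -> None:
        # The controls activate only once the invited party accepts.
        self.status = LinkStatus.ACTIVE

    def unlink(self) -> None:
        # OpenAI says parents are informed if a teen unlinks the accounts.
        self.status = LinkStatus.UNLINKED
        print(f"[notify parent {self.parent_id}] Accounts were unlinked.")

# Either party can initiate; here a parent has invited the teen.
link = ParentalLink(parent_id="parent-1", teen_id="teen-1")
link.accept()  # safeguards are now in force
```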
New Guardrails and Limits to Parental Access
Under the new measures, parents will gain key capabilities for managing their teenager’s interaction with the AI (a rough sketch of how such settings might be represented follows the list). These include the ability to:
- Reduce exposure to sensitive content.
- Control whether ChatGPT remembers past chats.
- Decide if conversations can be used to train OpenAI’s models.
- Set “quiet hours” that block access to the chatbot during certain times.
- Disable voice mode and the image generation and editing features.
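OpenAI has not said how these controls are stored or enforced. Purely as an illustration of the settings surface described above, the controls could be modeled as a per-teen configuration object; every name and default in this hypothetical sketch is an assumption.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class TeenSafetySettings:
    """Hypothetical model of the parent-managed controls (not OpenAI's API)."""
    reduce_sensitive_content: bool = True    # assumed stricter default
    memory_enabled: bool = False             # whether past chats are remembered
    allow_training_on_chats: bool = False    # whether chats may train models
    quiet_hours: tuple[time, time] | None = None  # access blocked in this window
    voice_mode_enabled: bool = True
    image_tools_enabled: bool = True         # image generation and editing

def in_quiet_hours(now: time, window: tuple[time, time]) -> bool:
    start, end = window
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # window crosses midnight

# Example: block access from 10 p.m. to 7 a.m.
settings = TeenSafetySettings(quiet_hours=(time(22, 0), time(7, 0)))
print(in_quiet_hours(time(23, 30), settings.quiet_hours))  # True
```

The midnight-crossing branch in in_quiet_hours reflects that a realistic quiet-hours window, say 10 p.m. to 7 a.m., spans two calendar days.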
However, a crucial boundary has been maintained: parents will not be granted access to a teen’s chat transcripts, underscoring the company’s effort to balance safety with user privacy.
OpenAI stated that parental notification will be reserved for rare situations: when internal systems and trained reviewers detect signs of a serious safety risk, parents may be notified with only the information necessary to support the teen’s safety. The company will also inform parents if a teenager unlinks the accounts.
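The company has not detailed this pipeline, but the sequence it describes, automated detection followed by human review followed by a minimal notification, can be sketched as below; every function name here is hypothetical.

```python
def automated_detector_flags(conversation: str) -> bool:
    # Placeholder: a real system would score content for serious safety risk.
    return "risk" in conversation

def trained_reviewer_confirms(teen_id: str, conversation: str) -> bool:
    # Placeholder for the human review step OpenAI describes.
    return True

def notify_parent(teen_id: str, message: str) -> None:
    print(f"[parent of {teen_id}] {message}")

def handle_conversation(teen_id: str, conversation: str) -> None:
    """Hypothetical escalation path: notification is reserved for rare,
    reviewer-confirmed cases, and shares only what safety requires."""
    if not automated_detector_flags(conversation):
        return
    if trained_reviewer_confirms(teen_id, conversation):
        notify_parent(teen_id, "A serious safety concern was identified.")

handle_conversation("teen-1", "flagged: serious risk indicators")
```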
The Race for Automated Safety
With approximately 700 million weekly active users across its ChatGPT products, OpenAI is concurrently working on an age prediction system. The tool is meant to predict automatically whether a user is under 18 so that teen-appropriate settings can be applied without manual activation.
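OpenAI has not explained how age prediction will work under the hood. As a sketch of the gating logic only, not OpenAI's method, one can imagine a classifier's estimate feeding a conservative threshold:

```python
def apply_teen_settings(user_id: str) -> None:
    print(f"{user_id}: teen-appropriate safeguards applied")

def apply_standard_settings(user_id: str) -> None:
    print(f"{user_id}: standard experience")

def route_by_predicted_age(user_id: str, prob_under_18: float,
                           threshold: float = 0.5) -> None:
    """Hypothetical gate: a model's estimated probability that the user is
    a minor decides which settings apply, with no manual step required."""
    # A cautious deployment would pick a low threshold so that borderline
    # cases default to the safer, teen-appropriate configuration.
    if prob_under_18 >= threshold:
        apply_teen_settings(user_id)
    else:
        apply_standard_settings(user_id)

route_by_predicted_age("user-42", prob_under_18=0.72)
```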
This push for stronger safeguards reflects growing regulatory scrutiny of AI companies over their chatbots’ potential harms to minors. Last month, Meta announced its own teen safeguards for its AI products, including training systems to avoid “flirty conversations” and discussions of self-harm or suicide with minors, and temporarily restricting access to certain AI characters. The industry is clearly grappling with the ethical and technical challenges of making powerful generative AI tools safe for younger users.

