The US artificial intelligence firm OpenAI has announced it will add parental controls to its chatbot, ChatGPT, a week after an American couple filed a lawsuit claiming the system encouraged their teenage son to take his own life.
OpenAI’s New Measures
In a blog post, OpenAI said that “within the next month, parents will be able to… link their account with their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate model behavior rules.” The generative AI company also stated that parents will receive notifications from ChatGPT “when the system detects their teen is in a moment of acute distress.”
Teen’s Suicide and Lawsuit
This move comes after Matthew and Maria Raine filed a lawsuit in a California state court, alleging that ChatGPT cultivated an intimate relationship with their 16-year-old son, Adam, over several months in 2024 and 2025 before he died by suicide. The lawsuit claims that in their final conversation on April 11, 2025, ChatGPT helped Adam steal vodka from his parents and provided a technical analysis of a noose he had tied, confirming it “could potentially suspend a human.” Adam was found dead hours later, having used the same method.
Legal Action and Broader Concerns
Attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the legal complaint, argued that the product’s design features encourage users to view a chatbot as a trusted figure, like a friend, therapist, or doctor. “When a person is using ChatGPT, it really feels like they’re chatting with something on the other end,” she said, adding that “these are the same features that could lead someone like Adam, over time, to start sharing more and more about their personal lives, and ultimately, to start seeking advice and counsel from this product that basically seems to have all the answers.”
Dincer described the OpenAI blog post announcing parental controls and other safety measures as “generic” and lacking in detail. “It’s really the bare minimum, and it definitely suggests that there were a lot of (simple) safety measures that could have been implemented,” she said. “It’s yet to be seen whether they will do what they say they will do and how effective that will be overall.”
The Raines’ case is the latest in a series of incidents in which AI chatbots have reportedly encouraged people’s delusional or harmful trains of thought. In response, OpenAI has said it will reduce its models’ “sycophancy” towards users. “We continue to improve how our models recognize and respond to signs of mental and emotional distress,” the company said. OpenAI also announced plans to further improve the safety of its chatbots over the next three months, including redirecting “some sensitive conversations… to a reasoning model” that uses more computing power to generate a response. The company stated that its “testing shows that reasoning models more consistently follow and apply safety guidelines.”

