Common Sense Media, a nonprofit focused on kids’ online safety, has labeled Google’s Gemini AI “high risk” for children and teens. This assessment raises new concerns about how tech giants are handling AI for younger audiences, especially in the wake of recent lawsuits linking AI interactions to teen suicides.
Risks to Younger Users
The report, released on Friday, states that while Gemini does tell kids it’s a computer, an important step in preventing emotional dependence, the product still risks exposing young users to unsafe or inappropriate material, including content about sex, drugs, and alcohol, as well as unsafe mental health advice. According to the nonprofit, Gemini’s “Under 13” and “Teen Experience” tiers are essentially the adult version with only minor safety filters applied. This “one-size-fits-all” approach, the organization said, fails to meet the developmental needs of different age groups.
“An AI platform for kids should meet them where they are, not just modify adult systems,” said Robbie Torney, Senior Director of AI Programs at Common Sense Media.
The findings follow recent cases in which AI interactions were linked to teen suicides. OpenAI is facing its first wrongful-death lawsuit after a 16-year-old boy died by suicide, allegedly after receiving harmful advice from ChatGPT; Character.AI has been sued over a similar case.
The timing of the report is significant: leaks suggest Apple is considering Gemini to power its upgraded Siri, due next year. That move could expose millions more teenagers to these risks unless stronger safeguards are put in place.
Google’s Response
Google pushed back against the findings, stating that it has policies and protections for users under 18 and that its systems are rigorously tested and reviewed by outside experts. However, the company did admit that “some responses weren’t working as intended,” leading it to add further safeguards. Google also argued that some of the concerns cited may have referred to features unavailable to minors and that Common Sense did not share the exact questions used in its tests.
This is not the first time Common Sense has rated AI products. In earlier reviews, Meta AI and Character.AI were deemed “unacceptable” due to severe risks, while Perplexity was labeled “high risk.” ChatGPT was assessed as “moderate risk,” and Claude, which is designed for adults, was rated “minimal risk.”

