Meta is adding new safeguards to its artificial intelligence (AI) products for teenagers, training its systems to avoid flirty conversations and discussions of self-harm or suicide with minors. The company is also temporarily limiting teens' access to certain AI characters.
In an email on Friday, Meta spokesperson Andy Stone said the company is taking these temporary steps while it develops longer-term measures to ensure teens have safe, age-appropriate AI experiences. Stone noted that the safeguards are already being rolled out and will be adjusted over time as the company refines its systems.
Meta’s AI policies came under intense scrutiny and backlash following a Reuters report. Earlier this month, US Senator Josh Hawley launched a probe into the Facebook parent company’s AI policies, demanding documents on rules that allowed its chatbots to interact inappropriately with minors. Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document, which was first reviewed by Reuters.
Meta confirmed the authenticity of the document but said that, after receiving questions from Reuters earlier this month, it removed the portions stating it was permissible for chatbots to flirt and engage in romantic role-play with children. “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone said earlier this month.