Sam Altman, CEO of OpenAI, has cautioned users against relying on ChatGPT for therapy or emotional support, emphasizing a critical absence of legal confidentiality protections when artificial intelligence is used for personal matters.
During a recent podcast, Altman highlighted a significant gap in current legal frameworks regarding AI interactions. He noted that while conversations with professionals like doctors, therapists, or lawyers are protected by legal privilege, no such confidentiality applies when individuals engage with AI systems like ChatGPT.
“People talk about the most personal stuff in their lives to ChatGPT,” Altman said during the podcast. “Young people, especially, use it as a therapist or life coach, asking things like ‘what should I do’ in their relationships. But unlike human professionals, there’s no legal protection around that information.”
Altman expressed concern that this lack of privacy could pose risks for users. He warned that in legal proceedings, companies like OpenAI could be compelled to produce user chat data, thereby exposing deeply personal information.
“I think that’s very screwed up,” he asserted. “We should have the same concept of privacy for your conversations with AI that we do with a therapist or a lawyer.”
These comments come amidst increasing scrutiny over how AI firms handle user data, especially as models like ChatGPT become more integrated into daily life. While OpenAI has taken steps to protect privacy—including offering more secure services for enterprise users—the company is currently embroiled in a legal battle with The New York Times. A recent court order, which OpenAI is appealing, would mandate the company to preserve and potentially produce user conversations for legal discovery.
On its website, OpenAI has described the order as “an overreach,” cautioning that it could set a dangerous precedent by enabling broader demands for user data in legal or law enforcement contexts.
Altman acknowledged that the current legal uncertainty is affecting public trust in AI platforms, particularly concerning sensitive topics. He affirmed podcast host Theo Von’s concerns about privacy, stating, “I think it makes sense to really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity.”
This issue reflects broader societal debates surrounding digital privacy in the wake of evolving legal landscapes, including the U.S. Supreme Court’s decision to overturn Roe v. Wade. That ruling prompted many users to migrate to encrypted apps or services for health data management, underscoring a growing demand for secure and confidential digital interactions.