ChatGPT users seeking emotional support or therapy through the AI chatbot may be unknowingly putting their privacy at risk, according to OpenAI CEO Sam Altman.
Speaking on a recent episode of This Past Weekend with Theo Von, Altman raised concerns about the lack of legal safeguards around AI interactions, especially as more users turn to platforms like ChatGPT for guidance on deeply personal issues.
“People talk about the most personal sh** in their lives to ChatGPT,” Altman said. “People use it — young people especially — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it… And we haven’t figured that out yet for when you talk to ChatGPT.”
Altman highlighted the risks this poses, particularly in legal contexts: without established privacy protections for AI interactions, companies like OpenAI could be compelled to hand over user conversations during lawsuits or investigations.
“I think that’s very screwed up,” Altman said. “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago.”
Beyond the courtroom, the lack of clear privacy protections is already shaping user behavior. Von himself admitted to Altman that he doesn’t use ChatGPT much because of privacy concerns. Altman responded, “I think it makes sense… to really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity.”