
OpenAI CEO Sam Altman recently highlighted a significant privacy concern regarding the use of ChatGPT. Altman cautioned that conversations with ChatGPT do not enjoy the same legal protections as those with therapists, doctors, or lawyers. This warning comes as more people, particularly younger users, turn to ChatGPT for guidance on personal issues, treating it as a virtual therapist or life coach.
During a discussion on the podcast "This Past Weekend" with Theo Von, Altman expressed his concerns about the absence of a policy framework to protect the privacy of these interactions.
He noted, "People talk about the most personal sh*t in their lives to ChatGPT. People use it - young people, especially, use it - as a therapist, a life coach; having these relationship problems and [asking] what should I do?"
Altman further elaborated on the legal implications, stating that while there is legal privilege for conversations with therapists, doctors, or lawyers, such protections do not currently extend to interactions with ChatGPT.
He acknowledged the urgent need to address this gap in policy, as the increasing reliance on AI for personal advice raises significant privacy concerns. "And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT," Altman explained.

Legal implications and privacy concerns
The potential for ChatGPT conversations to be used in legal proceedings is a significant concern. Altman noted that OpenAI could be compelled to hand over users' conversations under a subpoena or court order, even if the company objects. This lack of legal protection puts users at risk if they disclose sensitive information during their interactions with the AI.
Altman advocates establishing a privacy standard for human-AI chatbot conversations comparable to the confidentiality between a patient and their therapist. Such a standard would assure users that their interactions with AI tools are protected from legal scrutiny and misuse.
"No one had to think about that even a year ago, and now I think it's this huge issue of like, 'How are we gonna treat the laws around this?'" he noted. In addition to privacy concerns, Altman also raised broader issues related to the power and impact of artificial intelligence. He expressed apprehension about the unpredictable consequences of AI systems once they are deployed in the real world.
"I don't know who is using these tools and for what purposes, which adds to my fear," Altman admitted.
The debate over AI privacy and legal protection is further complicated by the fact that user data may still be stored temporarily and reviewed for abuse monitoring or quality assurance. While OpenAI has stated that conversations are not used to train its models when users turn off chat history and training, the possibility of data retention remains a concern.