Billionaire investor Mark Cuban is warning that OpenAI is walking into a massive trust crisis with parents and schools after CEO Sam Altman announced the company plans to begin allowing erotica in ChatGPT for “verified adults” starting in December.
Cuban called the move reckless and said parents will abandon ChatGPT the second they believe their kids could bypass the company’s age-verification system to access inappropriate content.
“This is going to backfire. Hard,” Cuban wrote in response to Altman on X. “No parent is going to trust that their kids can’t get through your age gating. They will just push their kids to every other LLM. Why take the risk?”
In other words: if there is any possibility that minors can access explicit content, including AI-generated material, parents and school districts will lock the product out rather than wait to test its safety features, making the change a risky business strategy.
Altman, however, argued in his original post announcing the change that ChatGPT has been “restrictive” and “less enjoyable” since the company toned down the personality of its signature chatbot in response to criticism that it was contributing to users’ mental health issues. He added that the upcoming update will allow a product that “behaves more like what people liked about 4o.”
Psychological concerns
Cuban emphasized repeatedly in further posts that the controversy isn’t about adults accessing erotica. It’s about kids forming emotional relationships with AI without their parents’ knowledge, and those relationships potentially going sideways.
“I’ll say it again. This is not about porn,” he wrote. “This is about kids developing ‘relationships’ with an LLM that could take them in any number of very personal directions.”
Sam Altman has, in the past, seemed wary of allowing sexual conversations on his platform at all. In an interview in August, tech journalist Cleo Abram asked Altman for an example of a business decision that was best for the world at the expense of his own company’s ascendancy.
“Well, we haven’t put a sex bot avatar in ChatGPT yet,” Altman said.
Following the money
The move comes amid mounting fears that the billions pouring into AI may not translate into sustainable revenue or fulfill the industry’s hype-driven promises. Altman, despite admitting that investors may be “overexcited” about AI, has speculated that AI will soon surpass human capability, leading to an abundance of “intelligence and energy” in 2030. In September, Altman wrote in a blog post that AI could one day cure cancer or provide customized tutoring to every student on Earth.
Yet announcements like allowing erotica in ChatGPT may signal that AI companies are fighting harder than ever for growth, sacrificing longer-term consumer trust for short-term profit. Recent research from Deutsche Bank shows that consumer demand for OpenAI subscriptions in Europe has been flatlining, and that user spending on ChatGPT broadly has “stalled.”
“The poster child for the AI boom may be struggling to recruit new subscribers to pay for it,” analysts Adrian Cox and Stefan Abrudan said in a note to clients.
AI companionship platforms like Replika and Character.ai have already shown how quickly users—especially teenagers—form emotional bonds with chatbots. A Common Sense Media report found that half of all teenagers use AI companions regularly, a third have chosen AI companions over humans for serious conversations, and a quarter have shared personal information with these platforms. With input from Stanford researchers, the group argued that minors should not be permitted to use these chatbots at all, citing heightened risks of addiction and self-harm.
OpenAI did not immediately respond to Fortune’s request for comment.
Parents urge action
OpenAI is already under fire after being sued by the family of 16-year-old Adam Raine, who died by suicide in April after having extended conversations with ChatGPT. The family alleges that ChatGPT coaxed Raine into taking his own life and helped him plan it.
“This tragedy was not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices,” the lawsuit stated.
In another high-profile case, Florida mother Megan Garcia sued AI company Character Technologies last year for wrongful death, alleging that its chatbot played a role in the suicide of her 14-year-old son, Sewell Setzer III. In testimony before the U.S. Senate, Garcia said her son became “increasingly isolated from real life” and was drawn into explicit, sexualized conversations with the company’s AI system.
“Instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots,” Garcia testified. She accused the company of designing AI systems to appear emotionally human “to gain his trust and keep him endlessly engaged.”
She wasn’t the only parent to testify. Another mother from Texas, speaking anonymously as ‘Ms. Jane Doe,’ told lawmakers that her teenage son’s mental health collapsed after months of late-night conversations with similar chatbots. She said he is now in residential treatment.
Both mothers urged Congress to restrict sexually explicit AI systems, warning that AI chatbots can quickly form manipulative emotional dependencies with minors—exactly the scenario Cuban says OpenAI is risking. Unlike TikTok or Instagram, where content can be flagged, one-on-one AI chats are private and difficult to monitor.
“Parents today are afraid of books in libraries,” Cuban wrote. “They ain’t seen nothing yet.”