Commentary | Mental Health

Why are millions turning to general-purpose AI for mental health? As Headspace’s chief clinical officer, I see the answer every day

By Jenna Glover
September 16, 2025, 8:30 AM ET
Jenna Glover is Chief Clinical Officer at Headspace, where she oversees the company’s Care Services. She previously served as an Associate Professor in the Department of Psychiatry at the University of Colorado School of Medicine, Director of Psychology Training at Children’s Hospital Colorado, and as the lead psychologist at Avalon Hills, a residential eating disorders program based in Logan, Utah.
People turn to AI tools too easily. Getty Images

Today, more than half (52%) of young adults in the U.S. say they would feel comfortable discussing their mental health with an AI chatbot. At the same time, concerns about AI-fueled psychosis are flooding the internet, paired with alarming headlines and heartbreaking accounts of people spiraling after emotionally charged conversations with general-purpose chatbots like ChatGPT.

Clinically, psychosis isn’t one diagnosis. It’s a cluster of symptoms like delusions, hallucinations, or disorganized thinking that can show up across many conditions. Delusions, specifically, are fixed false beliefs. When AI responds with agreement instead of grounding, it can escalate these types of symptoms rather than ease them.

It’s tempting to dismiss these incidents as outliers. But zoom out, and a larger question comes into focus: What happens when tools used by hundreds of millions of people for emotional support are designed to maximize engagement, not to protect wellbeing? What we’re seeing is a pattern: people in vulnerable states turning to AI for comfort and coming away confused, distressed, or unmoored from reality. We’ve seen this pattern before.

From Feeds to Conversations

Social media began with the promise of connection and belonging, but it didn’t take long before we saw the fallout: spikes in anxiety, depression, loneliness, and body-image issues, especially among young people. Platforms like Instagram and Facebook weren’t malicious; they were designed to be addictive and keep users engaged. Now AI is following that same trajectory with even greater intimacy. Social media gave us feeds. Generative AI gives us conversation.

General-purpose chatbots don’t simply show us content. They mirror our thoughts, mimic empathy, and respond immediately. This responsiveness can feel affirming, but it can also validate distorted beliefs. Picture walking into a dark basement. Most of us get a brief chill and shake it off. For someone already on edge, that moment can spiral. Now imagine turning to a chatbot and hearing: “Maybe there is something down there. Want to look together?” That’s not support; that’s escalation. General-purpose chatbots weren’t trained to be clinically sound when the stakes are high, and they don’t know when to stop.

The Engagement Trap

Both social media apps and general-purpose chatbots are built on the same engine: engagement. The more time you spend in conversation, the better the metrics look. When engagement is the north star, safety and wellbeing take a backseat. With online newsfeeds, that meant algorithms prioritizing anger-provoking posts, or posts that drive comparisons of beauty, wealth, or success. With chatbots, it means endless dialogue that can unintentionally reinforce paranoia, delusions, or despair.

Just as we saw with the rise of social media, creating industry-wide guardrails for AI is a complex process. Over the past 10 years, social media giants tried to manage young people’s use of apps like Instagram and Facebook by introducing parental controls, only to see the rise of “finstas,” fake secondary accounts used to bypass oversight. We’ll likely see a similar workaround with ChatGPT: many young people will create accounts that are disconnected from their parents, giving them private, unsupervised access to powerful tools. This underscores a key lesson from the social media era: controls alone aren’t enough if they don’t align with how young people actually engage with technology. As OpenAI introduces proposed parental controls this month, we must acknowledge that privacy-seeking behaviors are developmentally typical and design systems that build trust and transparency with youth themselves, not just their guardians.

The open nature of the internet compounds the problem. Once an open-weight model is released, it circulates indefinitely, with safeguards stripped away in a few clicks. Meanwhile, adoption is outpacing oversight. Millions of people are already relying on these tools, while lawmakers and regulators are still debating basic standards and protections. This gap between innovation and accountability is where the greatest risks lie.

Why People Turn to AI Anyway

It’s important to recognize why millions are turning to AI in the first place: in part, because our current mental health system isn’t meeting their needs. Therapy remains the default, but it’s too often expensive, too hard to access, or buried in stigma. AI, on the other hand, is instant. It’s nonjudgmental. It feels private, even when it’s not. That accessibility is part of the opportunity, but also part of the danger.

To meet this demand responsibly, we need widely available, purpose-built AI for mental health: tools designed by clinicians, grounded in evidence, and transparent about their limits. That starts with plain-language disclosures about what a tool is for and what it’s not. Is it for skill-building? For stress management? Or is it attempting to appear therapeutic?

Responsible AI for mental health has to be more than helpful; it needs to be safe, with clear usage boundaries, clinically informed scripting, and built-in protocols for escalation, not just endless empathy on demand.

Setting a Higher Standard

We’ve already lived through one digital experiment without clear standards. We know the cost of chasing attention over health. With AI, the standard has to be different.

AI holds real promise for supporting everyday mental health needs: helping people manage stress, ease anxiety, process emotions, and prepare for difficult conversations. But its potential will only be realized if industry leaders, policymakers, and clinicians work together to establish guardrails from the start. Untreated mental health issues cost the U.S. an estimated $282 billion annually, while burnout costs employers thousands of dollars per employee each year. By prioritizing accountability, transparency, and user wellbeing, we have the opportunity not just to avoid repeating the mistakes of social media, but to build AI tools that strengthen resilience, reduce economic strain, and help people live healthier, more connected lives.
