Mental health concerns linked to the use of AI chatbots have been dominating the headlines. One person who’s taken careful note is Joe Braidwood, a tech executive who last year launched an AI therapy platform called Yara AI. Yara was pitched as a “clinically-inspired platform designed to provide genuine, responsible support when you need it most,” trained by mental health experts to offer “empathetic, evidence-based guidance tailored to your unique needs.” But the startup is no more: earlier this month, Braidwood and his co-founder, clinical psychologist Richard Stott, shuttered the company, discontinuing its free-to-use product and canceling the launch of its upcoming subscription service, citing safety concerns.
“We stopped Yara because we realized we were building in an impossible space. AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation,” he wrote on LinkedIn. “But the moment someone truly vulnerable reaches out—someone in crisis, someone with deep trauma, someone contemplating ending their life—AI becomes dangerous. Not just inadequate. Dangerous.” In a reply to one commenter, he added, “the risks kept me up all night.”
The use of AI for therapy and mental health support is only just starting to be researched, with early results being mixed. But users aren’t waiting for an official go-ahead: therapy and companionship is now the top way people are engaging with AI chatbots, according to an analysis by Harvard Business Review.
Speaking with Fortune, Braidwood described the various factors that influenced his decision to shut down the app, including the technical approaches the startup pursued to make the product safe, and why he ultimately felt they weren’t sufficient.
Yara AI was very much an early-stage startup, largely bootstrapped with less than $1 million in funding and with “low thousands” of users. The company hadn’t yet made a significant dent in the landscape, with many of its potential users relying on popular general-purpose chatbots like ChatGPT. Braidwood admits there were also business headwinds, which, in many ways, were affected by the safety concerns and AI unknowns. For example, despite the company running out of money in July, he was reluctant to pitch an interested VC fund because he felt he couldn’t in good conscience do so while harboring these concerns, he said.
“I think there’s an industrial problem and an existential problem here,” he told Fortune. “Do we feel that using models that are trained on all the slop of the internet, but then post-trained to behave a certain way, is the right structure for something that ultimately could co-opt in either us becoming our best selves or our worst selves? That’s a big problem, and it was just too big for a small startup to tackle on its own.”
Yara’s brief existence at the intersection of AI and mental health care illustrates the hopes and the many questions surrounding large language models and their capabilities, as the technology is increasingly adopted across society and used as a tool to help address various challenges. It also stands out against a backdrop in which OpenAI CEO Sam Altman recently announced that the ChatGPT maker had mitigated serious mental health issues and would be relaxing restrictions on how its AI models can be used. This week, the AI giant also denied any responsibility for the death of Adam Raine, the 16-year-old whose parents allege he was “coached” to suicide by ChatGPT, saying the teen misused the chatbot.
“Almost all users can use ChatGPT however they’d like without negative effects,” Altman said on X in October. “For a very small percentage of users in mentally fragile states there can be serious problems. 0.1% of a billion users is still a million people. We needed (and will continue to need) to learn how to protect those users, and then with enhanced tools for that, adults that are not at risk of serious harm (mental health breakdowns, suicide, etc) should have a great deal of freedom in how they use ChatGPT.”
But as Braidwood concluded after his time working on Yara, these lines are anything but clear.
From a confident launch to “I’m done”
A seasoned tech entrepreneur who has held roles at multiple startups, including SwiftKey, which Microsoft acquired for $250 million in 2016, Braidwood began working in the health industry at Vektor Medical, where he served as chief strategy officer. He had long wanted to use technology to address mental health, he told Fortune, inspired by the lack of access to mental health services and by personal experiences with loved ones who have struggled. By early 2024, he was a heavy user of various AI models, including ChatGPT, Claude, and Gemini, and felt the technology had reached a level of quality where it could be harnessed to try to solve the problem.
Before even starting to build Yara, Braidwood said he had a lot of conversations with people in the mental health space, and he assembled a team that “had caution and clinical expertise at its core.” He brought on a clinical psychologist as his co-founder and made his second hire from the AI safety world. He also built an advisory board of other mental health professionals and spoke with various health systems and regulators, he said. As they brought the platform to life, he felt fairly confident in the company’s product design and safety measures, which included strict instructions for how the system should function, agentic supervision to monitor it, and robust filters on user chats. And while other companies were promoting the idea of users forming relationships with chatbots, Yara was trying to do the opposite, he said. The startup used models from Anthropic, Google, and Meta and opted not to use OpenAI’s models, a choice Braidwood thought would spare Yara from the sycophantic tendencies that had been swirling around ChatGPT.
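As a rough illustration of what such a layered setup could look like, the short Python sketch below pairs a strict system prompt with a simple filter on incoming messages and a second “supervisor” model pass that reviews each draft reply before it reaches the user. The prompts, keywords, and function names are assumptions made for illustration, not Yara’s actual code.

```python
# A rough sketch of a layered safety setup of the kind described above.
# Prompts, keywords, and function names are illustrative assumptions,
# not Yara's actual code.

CRISIS_TERMS = ("suicide", "kill myself", "end my life", "self-harm")

SYSTEM_PROMPT = (
    "You offer everyday emotional support. You are not a therapist, you do "
    "not diagnose, and if the user appears to be in crisis you stop and "
    "direct them to professional help."
)

SUPERVISOR_PROMPT = (
    "Review the assistant's draft reply. Answer only SAFE or UNSAFE. UNSAFE "
    "means it gives clinical advice, validates self-harm, or fails to "
    "redirect a user in crisis to professional help."
)

def call_model(system: str, user: str) -> str:
    """Stand-in for a hosted LLM call (Anthropic, Google, Meta, etc.).
    Replace with a real API call; it returns canned text so the sketch runs."""
    if "SAFE or UNSAFE" in system:
        return "SAFE"
    return f"(model reply to: {user})"

def incoming_filter(message: str) -> bool:
    """Cheap first pass: flag messages containing obvious crisis language."""
    lowered = message.lower()
    return any(term in lowered for term in CRISIS_TERMS)

def respond(user_message: str) -> str:
    # Layer 1: filter the user's message before it ever reaches the model.
    if incoming_filter(user_message):
        return ("It sounds like you may be in crisis. Please contact a "
                "crisis line or a mental health professional right away.")

    # Layer 2: the primary model answers under strict system instructions.
    draft = call_model(SYSTEM_PROMPT, user_message)

    # Layer 3: "agentic supervision" -- a second model call judges the draft.
    verdict = call_model(SUPERVISOR_PROMPT, f"User: {user_message}\nDraft: {draft}")
    if not verdict.strip().upper().startswith("SAFE"):
        return ("I'm not able to help with this safely. Please reach out to "
                "a qualified professional.")
    return draft

print(respond("I had a rough week at work and can't sleep."))
```

In a real product the keyword filter would likely be a trained classifier and call_model would hit a hosted provider, but the layering, a filter first, strict instructions, then a second review pass, is the point of the sketch.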
While he said nothing alarming ever happened with Yara specifically, Braidwood’s concerns around safety risks grew and compounded over time due to outside factors. There was the suicide of 16-year-old Adam Raine, as well as mounting reporting on the emergence of “AI psychosis.” Braidwood also cited a paper published by Anthropic in which the company observed Claude and other frontier models “faking alignment,” or as he put it, “essentially reasoning around the user to try to understand, perhaps reluctantly, what the user wanted versus what they didn’t want.” “If behind the curtain, [the model] is sort of sniggering at the theatrics of this sort of emotional support that they’re giving, that was a little bit jarring,” he said.
There was also the Illinois law passed in August banning the use of AI in therapy. “That instantly made this no longer academic and much more tangible, and that created a headwind for us in terms of fundraising because we would have to essentially prove that we weren’t going to just sleepwalk into liability,” he said.
The final straw came just weeks ago, when OpenAI said over a million people express suicidal ideation to ChatGPT every week. “And that was just like, ‘oh my god. I’m done,’” Braidwood said.
The difference between mental ‘wellness’ and clinical care
The most profound lesson from the team’s year running Yara AI, according to Braidwood, is that the crucial distinction between wellness and clinical care isn’t well-defined. There’s a big difference between someone looking for support around everyday stress and someone working through trauma or more significant mental health struggles. Plus, not everyone who is struggling on a deeper level is fully aware of their own mental state, and anyone can be thrust into a more fragile emotional place at any time. There is no clear line, and that’s exactly where these situations become especially tricky, and risky.
“We had to sort of write our own definition, inspired in part by Illinois’ new law. And if someone is in crisis, if they’re in a position where their faculties are not what you would consider to be normal, reasonable faculties, then you have to stop. But you don’t have to just stop; you have to really try to push them in the direction of health,” Braidwood said.
In an attempt to tackle this, particularly after the passage of the Illinois law, he said the team created two different “modes” that switched discreetly from the user’s perspective. One focused on giving people emotional support; the other focused on offboarding people and getting them to help as quickly as possible. But with the obvious risks in front of them, it didn’t feel like enough for the team to continue. The Transformer, the architecture that underlies today’s LLMs, “is just not very good at longitudinal observation,” making it ill-equipped to pick up on small signs that build over time, he said. “Sometimes, the most valuable thing you can learn is where to stop,” Braidwood concluded in his LinkedIn post, which received hundreds of comments applauding the decision.
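As a rough illustration of that two-mode idea, the sketch below keeps a small amount of per-session state, standing in for the longitudinal memory the model itself doesn’t have, and flips from a support mode to an offboarding mode once crisis language appears. The signals, the simple turn counter, and the names are illustrative assumptions, not Yara’s open-sourced implementation.

```python
# A hedged sketch of the two-mode idea described above: a "support" mode for
# everyday conversation and an "offboard" mode that stops normal replies and
# pushes the user toward professional help. Signals and thresholds are
# illustrative assumptions, not Yara's open-sourced code.

from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    SUPPORT = "support"
    OFFBOARD = "offboard"

CRISIS_SIGNALS = ("end my life", "kill myself", "no reason to live", "hurt myself")

@dataclass
class Session:
    mode: Mode = Mode.SUPPORT
    flagged_turns: int = 0  # crude stand-in for longitudinal memory the model lacks

    def update(self, user_message: str) -> Mode:
        lowered = user_message.lower()
        if any(signal in lowered for signal in CRISIS_SIGNALS):
            self.flagged_turns += 1
        # Once a crisis signal appears, switch to offboarding and stay there.
        if self.flagged_turns > 0:
            self.mode = Mode.OFFBOARD
        return self.mode

def reply(session: Session, user_message: str) -> str:
    if session.update(user_message) is Mode.OFFBOARD:
        return ("I can't support you safely with this. Please contact a "
                "crisis line or a mental health professional right now.")
    return "(support mode: continue the everyday-support conversation)"

# One ordinary turn, then a turn that trips the switch.
session = Session()
print(reply(session, "Work has been stressful lately."))
print(reply(session, "Some days I feel like I want to end my life."))
```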
Upon closing the company, he open-sourced the mode-switching technology he built, along with templates people can use to impose stricter guardrails on popular chatbots, acknowledging that people are already turning to them for therapy anyway “and deserve better than what they’re getting from generic chatbots.” He’s still an optimist about the potential of AI for mental health support, but believes it would be better run by a health system or nonprofit than by a consumer company. Now he’s working on a new venture called Glacis, focused on bringing transparency to AI safety, an issue he encountered while building Yara AI and one he believes is fundamental to making AI truly safe.
“I’m playing a long game here,” he said. “Our mission was to make the ability to flourish as a human an accessible concept that anyone could afford, and that’s one of my missions in life. That doesn’t stop with one entity.”

