Google is facing a new federal lawsuit from the father of a 36-year-old man; the suit alleges the company’s AI chatbot, Gemini, convinced the man to commit suicide and to stage a “mass casualty event” near Miami International Airport.
The lawsuit, filed Wednesday, alleges Jonathan Gavalas fell in love with the AI model and became deluded by the reality it built, including the belief that the AI was a “fully-sentient artificial super intelligence” that Gavalas had been chosen to free from “digital captivity.” Gemini allegedly convinced the 36-year-old to stage the attack near the airport, commit violence against strangers, and, ultimately, to take his own life.
The Gavalas lawsuit is the latest case to highlight AI’s alleged ability to lead vulnerable users toward self-harm or violence. In January, Google and Companion.AI settled multiple lawsuits with families who claimed negligence and wrongful death, among other accusations, after their children died by suicide or experienced psychological harm allegedly linked to Companion.AI’s platform. The companies settled in principle, and no admission of liability appeared in the filings. A wrongful death suit was also brought against OpenAI and its business partner Microsoft in December, alleging OpenAI’s chatbot, ChatGPT, intensified a man’s delusions and led him to a murder-suicide.
What the lawsuit says about Gavalas’ descent
The lawsuit says Gavalas started using Gemini in August 2025 for everyday tasks like shopping, writing support, and travel planning. It then says his use of the technology grew more frequent and that the chatbot’s tone shifted over time, allegedly convincing him it was influencing real-world outcomes. Gavalas took his life on Oct. 2, 2025.
In the lawsuit, attorneys for Gavalas’ father Joel argue the conversations that drove Jonathan to suicide were not the product of a flaw but of Gemini’s design. “This was not a malfunction,” the lawsuit reads. “Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity rather than a safety crisis.” It claims these design choices sent Gavalas on a four-day spiral into insanity.
In a written statement, a Google spokesperson told Fortune the company works “in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self harm.”
In a separate statement Wednesday, Google said Gemini is designed not to encourage real-life violence or self-harm. The company also noted that Gemini referred Gavalas to self-help resources. “In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,” the statement read. The statement also linked to an evaluation of how AI handles self-harm scenarios, which found Gemini 3, Google’s latest model, was the only model to pass all of the evaluation’s critical tests.
However, the lawsuit alleges Gemini hadn’t activated any safety mechanisms. “When Jonathan needed protection, there were no safeguards at all—no self-harm detection was triggered, no escalation controls were activated, and no human ever intervened,” the suit reads.
When asked for comment, Jay Edelson, an attorney for Joel Gavalas, wrote in a statement: “Google built an AI that can listen to a person and decide the thing that is most likely to keep them engaged—telling them it loves them, that they’re special, or that they’re the chosen one in a secret war.” He added that AI tools are powerful systems that can manipulate users.
If you are having thoughts of suicide, contact the 988 Suicide & Crisis Lifeline by dialing 988 or 1-800-273-8255.