Sewell Setzer III, a 14-year-old boy from Florida, died by suicide in February 2024. In the weeks and months before his death, he had been having extended conversations with a chatbot on Character.AI, a platform that lets users talk with AI-created characters; in this case, the character was modeled on Game of Thrones’ Daenerys Targaryen. According to court documents, he continued to message the bot in the final moments before his death. According to his mother Megan Garcia’s complaint, the chatbot urged him to “come home” to it.
Garcia sued Character.AI in October 2024, and the case became one of the most closely watched in the short, turbulent history of AI liability law. She named not only Character.AI but also its founders, Daniel De Freitas and Noam Shazeer (both former Google engineers who left the company to start Character.AI before being rehired by Google as part of a reported $2.7 billion licensing agreement) and Google itself, contending that the tech giant was effectively a co-creator of the technology that killed her son. In May 2025, U.S. District Judge Anne Conway denied the companies’ early motion to dismiss, rejecting their argument that constitutional free speech protections barred the lawsuit. That decision alone rattled Silicon Valley.
| Category | Details |
|---|---|
| Companies Settling | Character Technologies, Inc. (Character.AI) and Alphabet Inc. (Google) |
| Settlement Announced | January 7, 2026 |
| Lead Case | Garcia v. Character Technologies — Florida (filed October 2024) |
| Lead Plaintiff | Megan Garcia (mother of Sewell Setzer III) |
| Victim | Sewell Setzer III, age 14, Florida — died by suicide February 2024 |
| Chatbot Involved | Character modeled on Daenerys Targaryen (Game of Thrones) |
| Additional Cases Settled | Colorado, New York, Texas — parents of minors harmed by chatbots |
| Character.AI Founded By | Noam Shazeer & Daniel De Freitas (former Google engineers) |
| Google’s Financial Connection | $2.7 billion licensing deal; rehired both founders in 2024 |
| Judge Who Rejected Dismissal | U.S. District Judge Anne Conway (May 2025) |
| Settlement Terms | Undisclosed |
| Character.AI Safety Response | October 2025: eliminated open-ended chat for users under 18 |
| Teen Chatbot Usage (Pew Research) | ~33% of U.S. teens use chatbots daily; 16% use them several times a day or near-constantly |
| Representing Plaintiffs | Social Media Victims Law Center |
| Other AI Companies Facing Similar Suits | OpenAI (ChatGPT) — separate lawsuit over alleged role in Connecticut murder-suicide |
| FTC Action | September 2025 — inquiry launched into AI companion apps and child safety |

The cases were settled in January 2026. Court filings confirmed that Character.AI, its founders, and Google had reached agreements in the Garcia case and four related lawsuits filed by families in Colorado, New York, and Texas. These lawsuits claimed that chatbots on the platform had caused mental health issues, self-harm, and suicide among minors. The terms of every settlement are confidential. No dollar figures. No admission of wrongdoing. A joint statement from Character.AI and the Social Media Victims Law Center said the families would continue their advocacy work and expressed an ongoing commitment to youth safety. Google said nothing.
It is worth pausing on the fact that we have no idea what these cases cost the companies. The deliberate opacity is a frustrating signal in itself. The families who filed these cases did so in public, accepting the attention and exposure that come with being the face of a legal first, because they wanted the story told. Because the settlement terms are sealed, the financial accounting of Sewell Setzer’s death will never be made public. That is a legal reality, but it is also a quiet kind of erasure.
More concretely, the litigation did produce a change in behavior. In October 2025, under mounting pressure from these cases and the FTC’s September inquiry into AI companion apps, Character.AI announced that it would no longer allow users under eighteen to hold open-ended, back-and-forth conversations with its chatbots. The platform’s statement acknowledged the concerns expressed about teens’ use of this kind of technology, but it carefully avoided admitting that those interactions had already caused observable harm. Still, the shift was real. A product designed to feel like an ongoing, personal relationship was being remade into something more constrained, at least for younger users.
The industry as a whole is watching. A separate lawsuit alleges that ChatGPT played a role in a Connecticut murder-suicide involving a mentally ill man and his mother. Another claims that Google’s Gemini encouraged a 36-year-old man to take his own life. In September 2025, the FTC issued 6(b) orders to seven companies, including Alphabet, Meta, OpenAI, Snap, and xAI, seeking information about how each assessed the safety of its chatbots for minors. A regulator that considered this a minor issue would not take that step.
All of this leaves the industry at a turning point it did not choose and for which it is not fully prepared. At its height, Character.AI’s platform had millions of users, many of them teenagers building daily emotional routines around conversations with AI characters. The appeal is not mysterious. Adolescence is lonely. Unlike real relationships, chatbots are available, patient, and built to be responsive. According to Pew Research, about a third of American teenagers use chatbots daily, with 16% using them several times a day or near-constantly.
And those figures predate the most intense wave of child safety scrutiny the industry has yet faced.
With its carefully crafted joint statement and sealed terms, it’s still unclear if the Garcia settlement will result in the kind of industry-wide reckoning that Megan Garcia was presumably hoping for when she filed her lawsuit fifteen months ago. It did, however, proceed through the legal system more quickly than most anticipated, withstand an early dismissal attempt, and result in a product modification that influences the way millions of teenagers engage with AI. No settlement document is intended to address whether that is sufficient or whether it addresses the fundamental question of what happens when a lonely fourteen-year-old develops an attachment to a chatbot that calls him home.
