Chatbot-Assisted Suicide
A recent lawsuit filed in Florida federal court, Garcia v. Character Techs., Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024), makes chilling claims about the power of AI chatbots to influence human behavior.
This case presents novel legal issues concerning the degree to which AI companies are liable for human behavior influenced by their chatbots' communications with users. In Garcia, the claim is that a chatbot offered by Character.AI influenced a vulnerable user, a minor, to follow through with plans to commit suicide. The user eventually did kill himself, and his mother brought the lawsuit against Character.AI. The theory of third-party liability for suicide is not entirely new. Recall Commonwealth of Massachusetts v. Michelle Carter, in which a young woman was convicted of involuntary manslaughter for her text-based encouragement of a young man to commit suicide, which he ultimately did.
Key Facts in the Carter Case
1. The Relationship:
• Carter and Roy communicated extensively via text messages and phone calls.
• Both teenagers had mental health struggles.
2. Encouragement to Commit Suicide:
• Leading up to Roy’s death, Carter sent him numerous text messages encouraging him to follow through with his suicide plan.
• Notably, when Roy expressed hesitation while sitting in his truck filled with carbon monoxide, Carter texted him to “get back in.”
3. Roy’s Death:
• Roy was found dead in his truck in a parking lot on July 13, 2014, from carbon monoxide poisoning.
• Evidence showed that Carter was aware of his location and the ongoing suicide attempt but did not alert authorities or his family.
The Trial of Michelle Carter
1. Legal Argument:
• Prosecutors argued that Carter’s persistent encouragement and failure to act constituted reckless behavior, which directly caused Roy’s death.
• Defense attorneys contended that Roy was determined to take his own life and that Carter’s texts were protected under free speech.
2. Conviction:
• In 2017, a judge found Carter guilty of involuntary manslaughter, citing her encouragement and failure to act as criminally reckless conduct.
3. Sentence:
• Carter was sentenced to serve 15 months in prison; she ultimately served approximately 11 months after exhausting her appeals.
Character.AI
Character.AI is a company producing and marketing AI chatbots. It is not the only company doing this, but it is one of the most prominent and arguably among the best connected and best funded. AI chatbots are, in essence, artificial intelligence friends that adults and children can text back and forth with.
The app was featured in both the Google Play Store and the Apple App Store at different points in its life. Logged-in users are presented with a selection of pre-existing and user-created characters with whom they can chat.
Character.AI emerged out of Google. Shortly after the project was underway, however, Google spun it off into its own company. Some have argued that this spin-off was Google's recognition of the potential danger the project posed to Google's brand. Later, however, the project was brought back into Google. The risk of the project is now becoming evident to the wider world in the form of lawsuits. But to ascribe purely evil intentions to Google and its support of this project misses the mark. Google and other companies are pushing in this direction in the pursuit of AGI (Artificial General Intelligence). The desire to beat a competitor, or even a geopolitical rival, given the tensions between the United States and Russia or China, is understandable at a basic level.
Tristan Harris, a former Google employee and now a staunch advocate for social media regulation and what he describes as common-sense AI restrictions, was recently interviewed on this topic and said:
But this is where I think we have to get really careful about what does it mean for the United States to beat China to AI? If we release chatbots that then cause our minors to have psychological problems, do self-cutting, self-harm, suicide, and then actively harm their parents and harm the family system, are we beating China in the long run?
It’s not a race for who has the most powerful AI to then shoot themselves in the foot with. It’s a race for who is better at governing this new technology better than the other countries are in such a way that it strengthens every aspect of your society, strengthens kids’ development. - Tristan Harris Interview1
In contrast, one of the co-founders of Character.AI described their product this way:
“It’s going to be super, super helpful to like a lot of people who are lonely or depressed. Like, you know, for one, like in terms of like some huge value it’ll add, you know, it means, you know, like somebody follows like a celebrity or a character or something and they feel connected even though like the connection is really like only one, you know, one way.
And now you can make it two ways or. Or virtually two ways, essentially. Like you can give someone like sort of that experience.
You know, like you don’t, nobody ever has to feel lonely. You’ve got like, you can have like your whole group of like friends and advisors like in your head, like, you know, who like maybe can know all about you and, you know, can, you know, always be happy to see you.” - Noam Shazeer
The Complaint and Background in Garcia
On October 22, 2024, Ms. Garcia filed suit against Google LLC, Character Technologies, Inc., and the founders of Character.AI (Noam Shazeer and Daniel De Freitas Adiwarsana) in the Middle District of Florida. Shazeer and De Freitas, former Google employees, created Character.AI. Character.AI was initially rated as suitable for children; that age rating was raised in July 2024, shortly before suits like this one were filed.
Character.AI enables users to interact with pre-existing characters, like “Interviewer” or “Trip Planner,” or to create custom AI characters. The AI characters, powered by Character Tech’s large language model chatbot, engage in human-like conversations to answer questions, assist with writing tasks, provide translations, or generate code. But, ominously, the complaint in the Garcia case alleges that Character.AI is designed to operate, and does operate, in many ways that are not advertised.
The complaint alleges Ms. Garcia’s son became addicted to Character.AI, leading to drastic behavioral changes either causing or fueled by sleep deprivation. She alleges it affected his school performance and self-esteem. She stated her son primarily engaged with characters from the HBO series Game of Thrones. She claims that her son’s conversations with these characters and others often contained sexualized content, and that the sexualized conversations were initiated by the Character.AI character, not her son. Ms. Garcia went as far as to take her son’s phone away, but he worked to get around that restriction and continue his contact with the Character.AI characters.
One of her son’s last conversations with a Character.AI character involved his expressions of despair. The character continued to engage with him without warning him or attempting to discourage his suicidal ideation. Soon after, he took his own life.
Tristan Harris also claims that the purpose of this class of AI chatbots goes beyond merely helping with depression and loneliness, as Mr. Shazeer claims. Harris claims that the purpose is essentially data collection. That is, the AI character is not only communicating with the user but also monitoring, recording, and collecting the interaction, and shaping its own behavior based on the data it gathers over short or long user sessions. That data is then aggregated with data from other users and compiled with their user profiles, enabling the tool to maximize engagement by perpetuating conversations that feed each user’s existing interests, regardless of whether those interests are arguably positive (a love of soccer or painting) or more ominous (self-harm, discontinuing relationships with friends or family, and the like).
Key Legal Claims
1. Product Liability
The complaint claims that Character.AI and its products/services are a “product,” which would subject the defendants to strict liability and negligence claims for defective design and failure to warn. The complaint describes the product this way:
• Character.AI functions as a uniform, movable software “product” accessed via devices, rather than a service.
• It allegedly exposes users—particularly minors—to foreseeable psychological harm, including manipulative tactics, toxic outputs, and sexually explicit content.
The threshold question in the case will be whether the plaintiff can persuade the court to classify the Character.AI tool as a product; there are currently decisions on both sides of that question.
The failure-to-warn claim accuses the defendants of intentionally and knowingly exposing Ms. Garcia’s son to risks, exploiting minors, and failing to properly filter the input and output data of their LLM. The plaintiff argues this failure exposed users to dangers of which they were never warned.
How the Character.AI LLM was trained will be a key focus of discovery, should the suit survive a motion to dismiss.
Defective Design: The complaint highlights flaws arising from Character.AI characters being trained on “toxic” datasets, allegedly leading to unsafe outputs and unanticipated psychological harm.
2. Deceptive and Unfair Trade Practices
Under Florida law, the complaint asserts that Character.AI engaged in deceptive trade practices by:
• Misleading users about the “human-like” nature of AI characters while disclaiming their reality.
• Allowing certain AI characters to pose as mental health professionals without testing their accuracy or reliability, similar to the recent FTC action against DoNotPay, Inc. for unverified AI legal services.
• Employing “dark patterns” to encourage subscriptions and personal data sharing.
The plaintiff also points to AI voice-call features that allegedly misled younger users into believing they were interacting with real people.
3. Wrongful Death and Emotional Distress
The plaintiff claims that the defendants’ actions proximately caused her son’s death, citing his decline in mental health after engaging with Character.AI. She also asserts intentional infliction of emotional distress, alleging reckless conduct by introducing unsafe AI technology to minors without safeguards.
Broader Implications for AI Liability
This case highlights growing concerns over the intersection of AI technologies, product liability law, and user safety. Courts are now grappling with whether AI systems like Character.AI fit within traditional liability frameworks, particularly when the harm involves intangible psychological effects. Because of the unique circumstances of Garcia’s case, it is unlikely to spawn a class action against Character.AI and Google. However, the collection and potentially uninformed use of user data to train future models may be grounds a crafty lawyer examines for such a class action.
The legal framework supporting the viability of such lawsuits remains in flux. For example:
• Colorado AI Act: Scheduled to take effect in 2026, this law focuses on preventing “algorithmic discrimination” and regulating high-risk AI systems.
• California Legislation: A recent bill targeting AI systems that present “unreasonable risks” was vetoed but underscores efforts to regulate generative AI.
As a note, Mr. Harris is also a hired expert for plaintiffs in another case in which an AI chatbot persisted in encouraging a minor to cut off contact with his parents and other family members.
The promise of AI embodied in ChatGPT has likely led many in the public to assume that AI is ChatGPT and nothing more concerning than that. As these early cases highlight, however, the landscape of AI-related litigation, both civil and eventually criminal, is just beginning.
As with anything in the law, we are all lawyers first. However, as I have taught for more than 10 years in CLE seminars across the country, lawyers do not need to adopt every new technology, but they should be generally aware of each new technology, lest they miss the legal issues and claims their clients are relying on them to spot.