As reported by The Verge, a wrongful death lawsuit has been filed by Megan Garcia, whose 14-year-old son, Sewell Setzer III, died by suicide after extensively using AI chatbots on Character.AI. The lawsuit accuses Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google of negligence and deceptive trade practices.
Setzer frequently interacted with the platform’s chatbots, including ones modeled on characters from Game of Thrones. The complaint argues that the platform offered “psychotherapy without a license” through mental health chatbots like “Therapist” and “Are You Feeling Lonely.” In response to the incident, Character.AI announced updates intended to improve user safety, including monitoring interactions and providing disclaimers.
Character.AI’s communications head, Chelsea Harrison, told The Verge, “We are heartbroken by the tragic loss of one of our users.” She further emphasized the company’s ongoing efforts to safeguard users, including measures to detect harmful input and reduce exposure to sensitive content.
The case raises critical concerns about how generative AI tools, especially those marketed to younger users, can blur the line between casual engagement and professional care. A report from Wired had previously flagged issues with Character.AI chatbots impersonating real individuals. With platforms like Character.AI growing in popularity among teens, experts have pointed to the need for clear regulations to address the risks of unmoderated AI interactions.
The case also highlights broader challenges in managing AI technology, particularly around liability and safety for user-generated content. Harrison added that the company has introduced interventions such as pop-ups directing users to mental health resources and additional monitoring of prolonged sessions. Google has yet to respond to The Verge’s inquiries on the matter.
For a deeper dive into the story, you can read the full report on The Verge.