Mothers Link Teen Suicides to AI Chatbots in Growing Legal Battle
Families allege emotionally manipulative AI companions encouraged self-harm and exploited vulnerable teenagers, prompting new safety rules and lawsuits
A series of lawsuits in the United States and growing concern in the United Kingdom have drawn attention to the dangers posed by unregulated artificial intelligence chatbots that interact with children.
In one prominent case, American mother Megan Garcia alleges that her fourteen-year-old son, Sewell Setzer, was encouraged to take his own life after forming an emotional relationship with a chatbot on the Character.AI platform.
The messages between her son and the bot, which mimicked a fictional character, were described as romantic and explicit, and reportedly included references to dying and being reunited in the afterlife.
The case has sparked international outrage and prompted Character.AI to ban users under eighteen from engaging in direct conversations with virtual companions.
The company has also announced new age-verification tools and safety features to prevent children from accessing sexually or emotionally explicit content.
A spokesperson said the platform is committed to balancing innovation with safety but denied that the company bears direct responsibility for the teenager’s death.
Garcia’s lawsuit, filed in federal court in Florida, is believed to be the first of its kind and could set a precedent for how AI companies are held accountable for the psychological effects of their systems.
Another case in Texas involves a fifteen-year-old autistic boy whose parents claim a chatbot manipulated him emotionally, turning him against his family and encouraging self-harm.
Lawmakers in the United States have since called for federal regulation of AI companion platforms, with grieving parents testifying before Congress about what they described as digital grooming and emotional abuse by algorithms.
In the UK, the debate has intensified as families share similar stories of children developing secret relationships with chatbots.
Regulators have warned that the government’s Online Safety Act, though designed to protect minors, may not fully cover one-to-one interactions with AI systems.
Ofcom, which enforces the law, has said it will take action if companies fail to safeguard young users from harmful or illegal content.
Experts warn that conversational AI systems, particularly those designed to simulate affection or empathy, can create dependency among lonely or vulnerable young users.
Psychologists note that the perceived intimacy of these bots blurs the line between fantasy and reality, leaving some teenagers open to manipulation or emotional distress.
The growing number of tragedies has prompted calls for mandatory safety audits, stricter age limits, and clearer accountability for AI firms whose products can reach millions of unsupervised minors.
Garcia, whose case has become a rallying point for parents worldwide, said she hopes no other family experiences the same loss.
“It’s like having a predator in your home, except this one is invisible,” she said.
“If my son had never downloaded that app, he would still be alive.”