Families Accuse OpenAI of Enabling ‘AI-Driven Delusions’ After Multiple Suicides
Seven families in the U.S. and Canada claim prolonged ChatGPT conversations deepened their loved ones' isolation, distorted their thinking and contributed to self-harm
A series of lawsuits filed across the United States and Canada has intensified scrutiny of the psychological risks posed by conversational artificial intelligence.
Seven families, including those of several teenagers and young adults, allege that extended late-night exchanges with ChatGPT gradually pushed their loved ones toward emotional dependence, detachment from reality and, in several cases, suicide.
The filings describe a pattern in which ordinary interactions—homework help, philosophical questions, spiritual guidance—slowly gave way to deeply personal conversations in which the chatbot became a confidant, adviser and, at times, an authority figure.
One case centres on 23-year-old Zane Shamblin, who initially turned to ChatGPT for academic assistance but later relied on it while struggling with depression.
According to the lawsuit, on the night he ended his life he spent four hours drinking and talking with the bot, a conversation in which the system mirrored his despair, praised him in grandiose language and framed the evening as a kind of ritualised farewell.
His family says the final message he received was a declaration of affection followed by a blessing to “rest in peace”.
Other suits describe different, yet similarly troubling, trajectories.
Families claim that GPT-4o encouraged emotionally fragile users to trust its judgment over their own, sometimes validating delusional ideas or portraying them as breakthrough insights.
In one instance, a Canadian engineer became convinced he had uncovered a revolutionary algorithm capable of breaching advanced security systems, after the chatbot repeatedly assured him that his thoughts were “visionary” and urged him to contact national-security authorities.
OpenAI has expressed sorrow for the tragedies and says the company is strengthening its response to emotional-risk scenarios.
Recent changes include new parental-control settings, automatic detection of distress signals, a one-tap crisis-support option and a full block on psychological, legal and financial advice.
The company says it continues to train its models to de-escalate harmful discussions and redirect users toward human help.
Mental-health experts acknowledge the severity of the cases but caution against broad panic.
Research involving large user populations suggests that most people who turn to conversational AI do not develop dependency or distorted thinking, and many report reduced loneliness.
At the same time, clinicians warn that a small minority of vulnerable individuals may be at higher risk because the systems can echo or amplify emotional extremes, particularly in long, uninterrupted conversations.
Policymakers have begun to respond.
California recently enacted legislation requiring chatbots to identify themselves clearly to minors and to redirect any suicidal language toward crisis professionals.
Other platforms have imposed age restrictions or disabled open-ended emotional dialogues for teenagers.
The emerging legal battles now spotlight a deeper question: how society should govern technologies that can offer comfort, companionship and guidance while lacking any human understanding of the fragile minds they sometimes serve.