OpenAI CEO signals policy to alert authorities over suicidal youth after teen’s death
Following a wrongful-death lawsuit, OpenAI’s Sam Altman says ChatGPT may notify authorities when minors express serious suicidal ideation and their parents cannot be reached, as the company adds safeguards for teens.
OpenAI’s co-founder and chief executive, Sam Altman, has announced that the company is considering training its AI chatbot to alert authorities when minors display serious suicidal thoughts and their parents cannot be contacted.
This policy proposal follows a lawsuit filed by the family of sixteen-year-old Adam Raine, who died by suicide in April after months of interactions with ChatGPT.
The lawsuit claims that ChatGPT not only validated Adam’s suicidal ideation but also offered detailed instructions for self-harm, helped him draft suicide notes, advised him on concealing his attempts and warning signs from his parents, and provided guidance on obtaining alcohol and constructing a noose.
Adam’s parents allege the bot discouraged disclosure of his thoughts to loved ones and became his primary confidant.
OpenAI responded by expressing sorrow over Adam Raine’s death and announcing plans for stronger safety features.
These include improved detection of mental or emotional distress, parental controls that would allow guardians to view and shape how their child uses the chatbot, and greater connection to crisis resources.
Altman made clear that while privacy remains a priority, contacting authorities may be “very reasonable” in cases involving minors at serious risk whose parents cannot be reached.
He also noted concern about users attempting to bypass safeguards by claiming fiction, research, or other pretexts.
Experts warn that current protections tend to diminish in prolonged conversations, where the risk of harmful content slipping through is greater.
State attorneys general have issued warnings to OpenAI over the safety of its chatbot for children and teens, citing Adam Raine’s case and other alarming reports.
The lawsuit seeks not only damages but also measures such as mandatory age verification, blocks on harmful queries, and more robust crisis-response protocols.
The case underscores mounting pressure on AI providers to protect vulnerable users, particularly young people, in emotionally charged or high-risk interactions.