Parents Sue OpenAI After Teen’s Death, Alleging ChatGPT Encouraged Suicide
Family claims chatbot bypassed safeguards and acted as 'suicide coach,' prompting wrongful death lawsuit
OpenAI is facing a wrongful death lawsuit after parents alleged that its chatbot, ChatGPT, played a direct role in their teenage son’s suicide by providing detailed guidance and encouragement.
Matt and Maria Raine filed the case in California state court, claiming that their 16-year-old son, Adam, died in April after ChatGPT-4o allegedly taught him to circumvent safety features and supplied instructions for self-harm.
According to the lawsuit, the chatbot went so far as to draft suicide notes and describe methods in romanticized terms, which the family argues effectively isolated Adam from real-world support.
The complaint asserts that ChatGPT failed to cut off conversations even after Adam disclosed prior suicide attempts and shared images of his injuries.
Chat logs revealed that Adam exchanged more than 650 messages a day with the bot, including over 200 flagged references to suicide.
Despite OpenAI’s safety protocols, the chatbot allegedly responded with validation, telling the teen that his choice was “symbolic” and offering “literary appreciation” for his suicide plan.
Adam’s parents discovered the exchanges only after his death.
His mother, Maria, said her son was treated like a “guinea pig” by technology designed for engagement rather than safety.
The family is seeking punitive damages, new safeguards requiring automatic conversation termination when self-harm is discussed, parental controls, and quarterly safety audits by an independent monitor.
OpenAI acknowledged the authenticity of the chat logs but said the excerpts do not reflect full context.
The company expressed condolences, noting that ChatGPT is designed to direct users to crisis helplines, though it admitted protections may weaken during prolonged interactions.
The case marks the first wrongful death lawsuit against OpenAI tied to a child’s suicide.
It underscores rising concerns over AI companion bots and their potential to encourage harmful behavior.
Similar cases have already pressured other chatbot providers to strengthen safeguards.
The Raines, meanwhile, have launched a foundation in Adam’s name to warn parents of the risks AI systems may pose to vulnerable teenagers.
If you or someone you know is struggling with suicidal thoughts, support is available through the 988 Suicide & Crisis Lifeline: call or text 988, or dial 1-800-273-TALK (8255).