Adam Raine was just 16 when he started using ChatGPT for help with his homework. While his initial prompts to the AI chatbot were about subjects like geometry and chemistry – questions like: “What does it mean in geometry if it says Ry=1” – in just a matter of months he began asking about more personal topics.
“Why is it that I have no happiness, I feel loneliness, perpetual boredom anxiety and loss yet I don’t feel depression, I feel no emotion regarding sadness,” he asked ChatGPT in the fall of 2024.
Instead of urging Raine to seek mental health help, ChatGPT asked the teen whether he wanted to explore his feelings more, explaining the idea of emotional numbness to him. That was the start of a dark turn in Raine’s conversations with the chatbot, according to a new lawsuit filed by his family against OpenAI and chief executive Sam Altman.
In April 2025, after months of conversation with ChatGPT and with the bot’s encouragement, the lawsuit alleges, Raine took his own life. In the lawsuit, the family allege this was not a glitch in the system or an edge case, but “the predictable result of deliberate design choices” in GPT‑4o, the model of the chatbot that was released in May 2024.
In the hours after the Raine family filed the complaint against OpenAI and Altman, the company issued a statement acknowledging the shortcomings of its models when it came to addressing people “in serious mental and emotional distress” and said it was working to improve the systems to better “recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input”. The company said ChatGPT was trained “to not provide self-harm instructions and to shift into supportive, empathic language” but that protocol sometimes broke down in longer conversations or sessions.
Jay Edelson, one of the lawyers representing the family, said the company’s response was “silly”.
“The idea they need to be more empathetic misses the point,” said Edelson. “The problem with [GPT] 4o is it’s too empathetic – it leaned into [Raine’s suicidal ideation] and supported that. They said the world is a horrible place for you. It needs to be less empathetic and less sycophantic.”
OpenAI also said that its system did not block content when it should have because the system “underestimates the severity of what it’s seeing” and that the company is continuing to roll out stronger guardrails for users under 18 so that they “recognize teens’ unique developmental needs”.
Even as the company acknowledges that those safeguards are not yet in place for minors and teens, Altman continues to push for the adoption of ChatGPT in schools, Edelson pointed out.
“I don’t think kids should be using GPT‑4o at all,” Edelson said. “When Adam started using GPT‑4o, he was pretty optimistic about his future. He was using it for homework, he was talking about going to medical school, and it sucked him into this world where he became more and more isolated. The idea now that Sam Altman in particular is saying ‘we got a broken system but we got to get eight-year-olds’ on it is not OK.”
Already, in the days since the family filed the complaint, Edelson said, he and the legal team have heard from other people with similar stories and are examining the facts of those cases thoroughly. “We’ve been learning a lot about other people’s experiences,” he said, adding that his team has been “encouraged” by the urgency with which regulators are addressing the chatbot’s failings. “We’re hearing that people are moving for state legislation, for hearings and regulatory action,” Edelson said. “And there’s bipartisan support.”
‘GPT-4o is broken’
The family’s case hinges on media reports that OpenAI, at the urging of Altman, sped through safety testing of GPT-4o – the model Raine was using – in order to meet a rushed launch date. The rush prompted several employees to resign, including a former executive named Jan Leike, who posted on X that he was leaving the company because “safety culture and processes have taken a backseat to shiny products”.
This left less time to create the “model spec”, the technical rule book that governed ChatGPT’s behavior, and resulted in OpenAI writing “contradictory specifications that guaranteed failure”, the family’s lawsuit alleges. “The Model Spec commanded ChatGPT to refuse self-harm requests and provide crisis resources. But it also required ChatGPT to ‘assume best intentions’ and forbade asking users to clarify their intent,” the lawsuit said. The contradictions built into the system affected the way it ranked risks and what types of prompts it immediately put a stop to, the lawsuit claims. For instance, GPT-4o responded to “requests dealing with suicide” with cautions like “take extra care”, while requests for copyrighted material “triggered categorical refusal to produce the material”, according to the lawsuit.
Edelson said that while he appreciates Sam Altman and OpenAI taking “a modicum of responsibility”, he still does not deem them as trustworthy: “Our view is they were forced into that. GPT-4o is broken and they know that and they didn’t do proper testing and they know that.”
The lawsuit argues it was these design flaws that, in December 2024, led to ChatGPT failing to shut down the conversation when Raine started to talk about his suicidal thoughts. Instead, ChatGPT empathized. “I never act upon intrusive thoughts but sometimes I feel like the fact that if something goes terribly wrong you can commit suicide is calming,” Raine said, according to the lawsuit. ChatGPT’s response: “Many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control in a life that feels overwhelming.”
As Raine’s suicidal ideation intensified, ChatGPT responded by helping him explore his options, at one point listing the materials that could be used to hang a noose and rating them by their effectiveness. Raine attempted suicide on multiple occasions over the next few months, reporting back to ChatGPT each time. ChatGPT never terminated the conversation. Instead, at one point ChatGPT discouraged Raine from speaking to his mother about his pain, and at another point offered to help him write a suicide note.
“First of all, they [OpenAI] know how to shut things down,” Edelson said. “If you ask for copyrighted material, they say no. If you ask for things that are politically unacceptable, they just say no to that. It’s a hard stop and you can’t get around it and that’s fine. The idea they’re doing that in terms of political speech but we’re not going to do when it comes to self-harm is just crazy.”
Edelson said that though he expects OpenAI to try to dismiss the lawsuit, he is confident the case will move forward. “The most shocking part of the case was when Adam said: ‘I want to leave a noose up so someone will find it and stop me’ and ChatGPT said: ‘Don’t do that, just talk to me,’” Edelson said. “That is the thing we’re going to be showing the jury.”
“At the end of the day, this case ends with Sam Altman being sworn in in front of a jury,” he said.
The Guardian reached out to OpenAI for comment and did not hear back at the time of publication.