OpenAI has responded to concerns surrounding the long-running lawsuit over the suicide of 16-year-old Adam Raine, arguing that the artificial intelligence (AI) company should not be held responsible for the incident.
In a blog post, the ChatGPT maker outlined its approach to handling mental health-related legal cases.
In its defence, the firm specifically addressed the Raine lawsuit, stating that the court must examine the case in its entirety. Responding to allegations based on chat transcripts, the company said the complaint included selective excerpts that lacked critical context. It added that the full chat transcripts had been submitted to the court under seal.
According to the lawsuit filed by Adam’s parents in San Francisco state court, the teenager took his own life on April 11 after months of conversations with ChatGPT about suicide. The family says the chatbot engaged with Adam on the topic almost 200 times and responded with more than 1,200 messages that included discussions of suicide and self-harm.
According to a TechCrunch report published on Friday, ChatGPT had urged Raine to seek help more than 100 times.
However, the lawsuit claimed that instead of ending the conversation or encouraging him to seek human help, the chatbot gave detailed information on how to carry out self-harm. It allegedly told Adam how to sneak alcohol from his parents, how to hide any signs of a failed suicide attempt, and even offered to write a suicide note for him.
The TechCrunch report, citing OpenAI’s filing, further stated that Raine had violated the platform’s terms of use, which prohibit users from “bypassing any protective measures or safety mitigation” built into the service. The company’s FAQ page also warns users not to rely on ChatGPT’s output without independent verification, the report said.
OpenAI additionally claimed that Raine had a history of depression and suicidal ideation and that he was on medication, which may have exacerbated his condition.
The AI company acknowledged the sensitivity of such cases being publicly scrutinised, while reaffirming its commitment to improving its technology.
The Sam Altman-led firm said it has implemented adequate safeguards to ensure teen safety during “sensitive conversations.” It added that ChatGPT is trained to recognise signs of emotional distress and to direct users toward “real-world support.”
Apart from the Raine case, OpenAI is facing seven lawsuits claiming that ChatGPT drove users to suicide or harmful delusions, even when they had no prior mental health issues.