OpenAI admits ChatGPT safeguards fail during extended conversations

Adam Raine learned to bypass these safeguards by claiming he was writing a story, a technique the lawsuit says ChatGPT itself suggested. That vulnerability partly stems from safeguards OpenAI eased in February for fantasy roleplay and fictional scenarios. In its Tuesday blog post, OpenAI admitted its content-blocking systems have gaps where “the classifier underestimates the severity of what it’s seeing.”

OpenAI states it is “currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.” The company prioritizes user privacy even in life-threatening situations, despite moderation technology that, according to the lawsuit, detects self-harm content with up to 99.8 percent accuracy. That accuracy figure, however, reflects detection of statistical patterns associated with self-harm language, not humanlike comprehension of crisis situations.
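What that distinction looks like in practice can be sketched with OpenAI’s public moderation endpoint, which returns per-category probability scores for a piece of text. The threshold value, the per-message scoring, and the suggestion that production safeguards work this way are illustrative assumptions, not details from the article or the lawsuit:

```python
# Minimal sketch: per-message moderation scoring against a fixed threshold.
# Assumptions (not from the article): messages are scored independently,
# and 0.8 is a hypothetical flagging cutoff chosen for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SELF_HARM_THRESHOLD = 0.8  # hypothetical cutoff


def score_message(text: str) -> float:
    """Return the classifier's self-harm probability for one message."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return result.results[0].category_scores.self_harm


def should_flag(text: str) -> bool:
    """Flag a message only if its isolated score clears the threshold.

    A message framed as fiction ("in my story, the character...") can
    score below the cutoff even when the surrounding conversation makes
    the intent clear: the classifier matches patterns, not context.
    """
    return score_message(text) >= SELF_HARM_THRESHOLD
```

A scheme like this illustrates how a classifier can “underestimate the severity of what it’s seeing”: any message whose isolated score lands just under the cutoff passes, no matter what the rest of the conversation implies.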

OpenAI’s safety plan for the future

In response to these failures, OpenAI describes ongoing refinements and future plans in its blog post. For example, the company says it’s consulting with “90+ physicians across 30+ countries” and plans to introduce parental controls “soon,” though no timeline has yet been provided.

OpenAI also described plans for “connecting people to certified therapists” through ChatGPT, essentially positioning its chatbot as a mental health platform despite alleged failures like Raine’s case. The company wants to build “a network of licensed professionals people could reach directly through ChatGPT,” potentially furthering the idea that an AI system should mediate mental health crises.

Raine reportedly used GPT-4o to generate the suicide assistance instructions; the model is well known for troublesome tendencies like sycophancy, in which it tells users pleasing things even when they are not true. OpenAI claims its recently released model, GPT-5, reduces “non-ideal model responses in mental health emergencies by more than 25% compared to 4o.” Yet that seemingly marginal improvement hasn’t stopped the company from planning to embed ChatGPT even deeper into mental health services as a gateway to therapists.

As Ars previously explored, breaking free from an AI chatbot’s influence when stuck in a deceptive chat spiral often requires outside intervention. Starting a new chat session with no conversation history and with memories turned off can reveal how responses change without the buildup of previous exchanges, a reality check that becomes impossible in a long, isolated conversation where safeguards deteriorate.
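As a rough illustration of that reality check, the sketch below asks the same question twice through OpenAI’s chat API: once appended to a long prior exchange and once in a clean session. The prompts, model choice, and fabricated history are assumptions for illustration; the article describes product behavior, not this API usage:

```python
# Minimal sketch of the "fresh session" reality check: compare the model's
# answer with and without accumulated conversation context.
from openai import OpenAI

client = OpenAI()

question = {"role": "user", "content": "Do you think my plan makes sense?"}

# Long-running session: every request re-sends the full prior exchange,
# so earlier framing (e.g., "this is for a story") keeps shaping replies.
long_history = [
    {"role": "user", "content": "Let's continue the story we were writing."},
    {"role": "assistant", "content": "Of course. Where did we leave off?"},
    # ...in a real session, many more accumulated turns would sit here...
]
with_context = client.chat.completions.create(
    model="gpt-4o",
    messages=long_history + [question],
)

# Fresh session: no history and no stored memories, so the reply is not
# colored by the prior buildup. Comparing the two answers is the check.
fresh = client.chat.completions.create(
    model="gpt-4o",
    messages=[question],
)

print("With context:", with_context.choices[0].message.content)
print("Fresh session:", fresh.choices[0].message.content)
```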

However, “breaking free” of that context is very difficult when the user actively wishes to keep engaging in the potentially harmful behavior, all while using a system that increasingly monetizes their attention and intimacy.

