OpenAI will apply new restrictions to ChatGPT users under 18

OpenAI CEO Sam Altman announced on Tuesday a raft of new user policies, including a pledge to significantly change how ChatGPT interacts with users under the age of 18.

“We prioritize safety ahead of privacy and freedom for teens,” the post reads. “This is a new and powerful technology, and we believe minors need significant protection.”

The changes for underage users deal specifically with conversations involving sexual topics or self-harm. Under the new policy, ChatGPT will be trained to no longer engage in “flirtatious talk” with underage users, and additional guardrails will be placed around discussions of suicide. If an underage user uses ChatGPT to imagine suicidal scenarios, the service will attempt to contact their parents or, in particularly severe cases, local police.

Sadly, these scenarios are not hypotheticals. OpenAI is currently facing a wrongful death lawsuit from the parents of Adam Raine, who died by suicide after months of interactions with ChatGPT. Character.AI, another consumer chatbot, is facing a similar lawsuit. While the risks are particularly urgent for underage users considering self-harm, the broader phenomenon of chatbot-fueled delusion has drawn widespread concern, particularly as consumer chatbots have become capable of more sustained and detailed interactions.

Along with the content-based restrictions, parents who register an underage user account will be able to set “blackout hours” during which ChatGPT cannot be used, an option that was not previously offered.

The new ChatGPT policies come on the same day as a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots,” announced by Sen. Josh Hawley (R-MO) in August. Adam Raine’s father is scheduled to speak at the hearing, among other guests.

The hearing will also focus on the findings of a Reuters investigation that unearthed policy documents apparently encouraging sexual conversations with underage users. Meta updated its chatbot policies in the wake of the report.


Separating underage users will be a significant technical challenge, and OpenAI detailed its approach in a separate blog post. The service is “building toward a long-term system to understand whether someone is over or under 18,” but in the many ambiguous cases, the system will default toward the more restrictive rules. For concerned parents, the most reliable way to ensure an underage user is recognized is to link the teen’s account to an existing parent account. This also enables the system to directly alert parents when the teen user is believed to be in distress.

But in the same post, Altman emphasized OpenAI’s ongoing commitment to user privacy and giving adult users broad freedom in how they choose to interact with ChatGPT. “We realize that these principles are in conflict,” the post concludes, “and not everyone will agree with how we are resolving that conflict.”

If you or someone you know needs help, call 1-800-273-8255 for the National Suicide Prevention Lifeline. You can also text HOME to 741-741 for free, 24-hour support from the Crisis Text Line, or text or call 988. Outside of the U.S., please visit the International Association for Suicide Prevention for a database of resources.
