ChatGPT teen-safety measures to include age verification, OpenAI says

ChatGPT developer OpenAI announced new teen safety features Tuesday, including an age-prediction system and, in some countries, ID-based age verification.

In a blog post, OpenAI CEO Sam Altman described the struggles of balancing OpenAI’s priorities of freedom and safety, saying: “We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection.”

Altman wrote that the company was building a system that would try to automatically sort users into one of two separate versions of ChatGPT: one for adolescents ages 13 to 17, and one for adults 18 and older.

“If there is doubt, we’ll play it safe and default to the under-18 experience,” Altman wrote. “In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.”

The company made the announcement hours before a Senate Judiciary Committee hearing on the potential harm of AI chatbots was scheduled to start. Last month, a family sued OpenAI, saying ChatGPT functioned as a “suicide coach” and led to the death of their son.

In a separate blog post, the company said it will release parental controls at the end of the month that will let parents instruct ChatGPT on how to respond to their children and adjust settings such as memory and blackout hours.

Altman also noted that ChatGPT is not intended for people under 12, though the chatbot currently has no safeguards preventing children from using it. OpenAI didn’t immediately respond to a request for comment about children using its services.

Altman indicated that discussion of suicide should not be fully censored from ChatGPT. The chatbot “by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request,” he said.

If a person flagged by OpenAI’s age-estimating system expresses suicidal ideation, the company “will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm,” he wrote.

“I don’t expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking,” Altman wrote on X.

