Artificial-intelligence giant OpenAI has announced plans to allow sexually explicit material on its ChatGPT chatbot later this year, a move that a conservative advocacy group says risks "real mental health harms from synthetic intimacy" without appropriate safeguards.
The National Center on Sexual Exploitation issued a statement calling on OpenAI to reverse its plan to allow “erotica” on ChatGPT. OpenAI CEO Sam Altman said in a post on X Tuesday that in December, “as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more [kinds of content], like erotica for verified adults” on ChatGPT.
According to Altman, the company originally made ChatGPT “pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.”
Altman continued, “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.” The change will mean that ChatGPT will be able “to respond in a very human-like way, or use a ton of emoji, or act like a friend” — or, evidently, talk dirty to you.
NCOSE, formerly known as Morality in Media, argues that OpenAI is adding sexual content to the AI chatbot “without credible safeguards.”
“Sexualized AI chatbots are inherently risky, generating real mental health harms from synthetic intimacy; all in the context of poorly defined industry safety standards,” said Haley McNamara, executive director and chief strategy officer at NCOSE. “While [OpenAI’s] age verification is a good step to try preventing childhood exposure to explicit content, the reality is these tools have documented harms to adults as well. We’ve already seen other chatbots emboldened to engage in sexual conversation simulate themes of child abuse or push sexually violent written content on users who asked them to stop.”
McNamara concluded, “If OpenAI truly cares about user well-being, it should pause any plans to integrate this so-called ‘erotica’ into ChatGPT and focus on building something positive for humanity.”
In a social media post Wednesday, Altman wrote that his “tweet about upcoming changes to ChatGPT blew up on the erotica point much more than I thought it was going to! It was meant to be just one example of us allowing more user freedom for adults.” He said OpenAI is “making a decision to prioritize safety over privacy and freedom for teenagers” and said that “we are not loosening any policies related to mental health.”
At the same time, Altman said, “We also care very much about the principle of treating adult users like adults,” adding that “we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”
Washington, D.C.-based NCOSE calls itself “the leading national non-profit organization exposing the links between all forms of sexual exploitation such as child sexual abuse, prostitution, sex trafficking and the public health harms of pornography.”
Separately, OpenAI has come under fire from the entertainment industry over its Sora 2 video-generation system, which is able to produce recognizable movie and TV characters. Amid the backlash, the Motion Picture Association called on OpenAI to take “immediate action,” saying the company, rather than rights holders, should shoulder the burden of policing copyright infringement instead of relying on an opt-out system.