For AI researchers Eliezer Yudkowsky and Nate Soares, authors of the new, unambiguously titled book If Anyone Builds It, Everyone Dies, it’s time to freak out about the technology.
“Humans have a long history of not wanting to sound alarmist,” Yudkowsky said in an interview with Semafor before the book’s publication next week. “Will some people be turned off? Maybe. Someone, at some point, just has to say what’s actually happening and then see how the world responds.”
What is happening, according to the book, is that most of the big technology companies and AI startups like Anthropic and OpenAI are building software they don’t understand (the authors argue it’s closer to alchemy than science). At some point, if these firms continue along their current path, large language models will grow powerful enough that one of them will break free from human control. Before we even realize what’s happening, humanity’s fate will be sealed and the AI will devour Earth’s resources to power itself, snuffing out all organic life in the process.
With such a dire and absolute conclusion, the authors leave no room for nuance or compromise. Building the technology more slowly, or building something else entirely, isn’t put forth as an option. Even companies like Safe Superintelligence, started by OpenAI co-founder and former chief scientist Ilya Sutskever, should shut down, according to Yudkowsky and Soares.
In response to this bleak picture, some people are, indeed, turned off. Stephen Marche, writing for The New York Times, likened reading the book to hanging out with “the most annoying students you met in college while they try mushrooms for the first time.”