Harry And Meghan, Steve Bannon And More Sign Petition Urging AI ‘Superintelligence’ Ban

Topline

Hundreds of public figures—including billionaires, former White House officials, prominent AI researchers, Nobel laureates, members of the British royal family and right-wing media figures—signed a petition calling for a ban on the development of “superintelligence,” an advanced form of artificial intelligence that is expected to surpass human cognitive ability.

Key Facts

The petition, organized by the non-profit Future of Life Institute, whose website names Elon Musk as an external advisor, calls for a “prohibition on the development of superintelligence.”

The petition says this moratorium should remain in place until there is “broad scientific consensus that it will be done safely and controllably” and “strong public buy-in.”

Billionaire Richard Branson, Apple co-founder Steve Wozniak, and the Duke and Duchess of Sussex, Harry and Meghan, are among the 850 public figures who had signed the petition as of Wednesday morning.

Right-wing media figures Glenn Beck and Steve Bannon were also listed as verified signatories, along with evangelical leader Johnnie Moore, who previously served as an adviser to President Donald Trump.

Several prominent AI researchers and scientists also signed the petition, including Yoshua Bengio and Nobel Laureate Geoffrey Hinton—who have been referred to as the “Godfathers of AI”—and UC Berkeley’s Stuart Russell, the co-author of one of the most definitive textbooks on AI.

Former U.S. National Security Adviser Susan Rice and former chairman of the Joint Chiefs of Staff, Michael Mullen, were among the prominent former government officials to sign the petition.

What Have The Signatories Said?

In a statement accompanying his signature, Yoshua Bengio wrote: “Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years…To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”

Stuart Russell noted that the petition was not calling for a “ban or even a moratorium in the usual sense” but rather a “proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction.”

Tangent

In March 2023, the Future of Life Institute published another petition calling for a pause in “Giant AI Experiments,” which was signed by Elon Musk and several others who also signed Wednesday’s petition. Musk, who now runs his own AI company, xAI, has not signed the latest petition, and it is unclear whether he will do so. On its website, the non-profit lists Musk as an external advisor and says the billionaire has “highlighted the potential risks from advanced AI.” The institute’s research program on AI began in 2015, backed by a $10 million donation from Musk.

What Do We Know About Public Polling On AI Safety?

In September, Gallup published a poll showing 80% of U.S. adults believe the government should maintain rules for AI safety and data security, even if it slows down the speed of AI development. Only 9% of U.S. adults surveyed were in favor of speeding up the development of AI capabilities, even if it meant more lax AI safety and data security rules. Support for AI safety rules cuts across party lines, with 88% of Democrats and 79% of Republicans, along with independents, favoring it. The Future of Life Institute’s own poll found that 64% of U.S. adults believe “superhuman AI should not be developed until it is proven safe and controllable, or should never be developed.”

Further Reading

OpenAI Blocks Sora Deepfakes Of Martin Luther King After ‘Disrespectful Depictions’ (Forbes)

