Tech Leaders Endorse AI 'Extinction' Bombshell Statement

As generative AI development has reached a fever pitch in recent months, some top leaders in the field have issued a grim extinction warning. But others caution against alarmism and favor a more measured approach to mitigating the risks.

Shane Snider, Senior Writer, InformationWeek

May 30, 2023

3 Min Read
[Image: A biomechanical humanoid examining a human skull. Credit: MasPix via Alamy Stock]

The Center for AI Safety (CAIS) on Tuesday unveiled a blunt single-sentence statement -- signed by executives from OpenAI and DeepMind, along with Turing Award winners and leading AI researchers -- calling for action to mitigate the “risk of extinction from AI.”

The statement from CAIS reads simply:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The gloomy words come just two weeks after OpenAI CEO Sam Altman appeared before the US Congress on May 17 to ask lawmakers to regulate the industry. (Two days later, OpenAI released a free ChatGPT app for the iPhone and confirmed plans to bring the app to Android soon.)

Signatories of the CAIS statement include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI chief scientist Ilya Sutskever, OpenAI CTO Mira Murati, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, OpenAI's Altman, and professors from the University of California at Berkeley, Stanford University, and the Massachusetts Institute of Technology (MIT).

According to a press release from CAIS, the statement represents “a historic coalition of AI experts – along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists – establishing the risk of extinction from advanced, future AI systems as one of the world’s most important problems."

“We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” Dan Hendrycks, director of CAIS, said in a statement. “Pandemics were not on the public’s radar before COVID-19. It’s not too early to put guardrails in place and set up institutions so that AI risks don’t catch us off guard.”               

Two AI Camps Collide

Geoff Schaefer, head of Responsible AI at Booz Allen Hamilton, said that while the risks of AI are very real, it may be premature to take an overly alarmist approach. The issue, Schaefer told InformationWeek, has spawned two camps in responsible AI development -- one focused on long-term, existential AI risks and another focused on short-term risks and safety issues.

“These two sides sort of naturally pitted against each other and I don’t think that’s very helpful,” Schaefer says. “And what that has resulted in is statements like this that I think are both accurate and overly alarmist. We are not having a rational enough discussion about how the short-term risks and long-term risks are interdependent.”

He added, “For me, the five-alarm-fire bell has not gone off. We need to stay the course as we have really, really important use cases for these technologies. Let’s just continue to pursue those safely.”

Credo AI is working with Booz Allen to establish responsible AI solutions. Susannah Shattuck, head of product at Credo AI, says the statement struck a similar tone to the earlier (and longer) letter from the Future of Life Institute calling for a pause in further AI development. “One of the things that letter got wrong, and that this statement potentially gets wrong, is the response to this long-term risk. Even if it is a tiny fraction of likelihood risk of massive human impacts from super-intelligent AI systems, the response should not be to stop development of these systems, but instead to focus on developing safety measures at the same pace.”

She adds, “We’re not going to stop this train and what we can do is really steer this technology towards a future that is positive.”

Writer and futurist Daniel Jeffries offered a stronger rebuke of the statement, tweeting, “AI risks and harms are now officially a status game where everyone piles onto the bandwagon to make themselves look good… So why do people keep harping on this? Looks good. Costs nothing. Sounds good. That’s about it.”

But CAIS seems to be taking the threat very seriously.

"The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems,” CAIS’s Hendrycks said.

What to Read Next:

Citing Risks to Humanity, AI & Tech Leaders Demand Pause on AI Research

OpenAI CEO Sam Altman Pleads for AI Regulation

How Will the AI Bill of Rights Affect AI Development?

About the Author(s)

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.

