
Open letter calling for AI research "pause" sparks debate over risks

An open letter calling for a six-month "pause" in work on advanced artificial intelligence is dividing the tech industry — not just between AI boosters and skeptics, but also between different factions of AI's critics.

Driving the news: The letter — initially signed by Elon Musk, Apple co-founder Steve Wozniak and other industry luminaries — urged "a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

  • Specifically, it said that AI labs should "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the latest version of OpenAI's large language model, which it released two weeks ago.

What they're saying: The tech world was abuzz over the letter Wednesday, but not many expected to see either a voluntary industry slowdown or a government-mandated "pause" any time soon.

  • "There are no literal proposals in the actual moratorium," Box CEO Aaron Levie told Axios' Ina Fried on stage at the Axios What's Next Summit Wednesday. "It was just, 'Let's now spend the time to get together and work on this issue.' But it was signed by people that have been working on this issue for the past decade."
  • "There's a lot of conversation about, 'Let's pull the plug,' but I'm not sure there is a single plug," Arati Prabhakar, director of the White House Office of Science and Technology Policy, said in another interview by Ina at What's Next. Last fall OSTP unveiled its "Blueprint for an AI Bill of Rights."

Between the lines: The letter was organized by the Future of Life Institute, a nonprofit dedicated to "steering transformative technology...away from extreme large-scale risks."

  • The letter's argument roughly represents the position of "longtermist" AI critics, like Musk, who have been sounding alarms for more than a decade.
  • This view, initially laid out by philosopher Nick Bostrom and later embraced by many tech insiders, warns that an AI might end up — either through human malice or engineering error — with both the goal of destroying humanity and the power to achieve its aim.

Another camp of AI critics maintains that the letter and its advocates are inflating the power of large language models like OpenAI's.

  • ChatGPT is nowhere close to being the kind of "artificial general intelligence" (AGI) that might threaten humanity, they say — it's just auto-complete on steroids.
  • Worrying about a distant apocalypse, they argue, draws attention away from more immediate harms, including biased recommendations and misinformation, that AI systems already in use by governments and private companies are causing today.
  • Emily Bender, a linguistics professor at the University of Washington who has been at the forefront of AI criticism in recent years, tweeted that the open letter was "dripping with #AIHype."

Of note: OpenAI itself has warned of the need to hedge against AI's dangers.

  • In a February blog post, OpenAI CEO Sam Altman wrote, "Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential."

Yes, but: Another vocal contingent in tech views the whole case for an AI pause as fundamentally at odds with the tech industry's entrepreneurial spirit and drive to innovate.

  • "The sky is not falling, and Skynet is not on the horizon," Daniel Castro, director of the Center for Data Innovation at the industry-funded Information Technology and Innovation Foundation, said in a statement.
  • "However, AI advances have the potential to create enormous social and economic benefits across the economy and society," Castro argues. "Rather than hitting pause on the technology, and allowing China to gain an advantage, the United States and its allies should continue to pursue advances in all branches of AI research."
  • Computer scientist Andrew Ng, an AI veteran, tweeted, "There is no realistic way to implement a moratorium and stop all teams from scaling up LLMs, unless governments step in. Having governments pause emerging technologies they don’t understand is anti-competitive, sets a terrible precedent, and is awful innovation policy."

Our thought bubble: The dynamics of startup capitalism and tech investment make the kind of coordination and restraint an AI "pause" would require extremely unlikely. Washington's political gridlock and slow learning curve make government action equally unlikely, and global geopolitics are pushing the U.S. toward accelerating AI instead.
