The Independent UK
Josh Marcus

‘Human extinction’: OpenAI workers raise alarm about the risks of artificial intelligence


A group of current and former employees at top Silicon Valley firms developing artificial intelligence warned in an open letter that without additional safeguards, AI could pose a threat of “human extinction.”

The letter, signed by 13 mostly former employees of firms like OpenAI, Anthropic, and Google’s DeepMind, argues top AI researchers need more protections to air criticisms of new developments and seek input from the public and policymakers over the direction of AI innovation.

“We believe in the potential of AI technology to deliver unprecedented benefits to humanity,” the Tuesday letter reads. “We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

The letter argues that the companies developing powerful AI technologies, including artificial general intelligence (AGI), a theorised AI system as smart as or smarter than humans, “have strong financial incentives to avoid effective oversight,” from both their own employees and the public at large.

Neel Nanda of DeepMind is the only AI researcher currently affiliated with one of the companies who signed the letter.

“This was NOT because I currently have anything I want to warn about at my current or former employers, or specific critiques of their attitudes towards whistleblowers,” he wrote on X. “But I believe AGI will be incredibly consequential and, as all labs acknowledge, could pose an existential threat. Any lab seeking to make AGI must prove itself worthy of public trust, and employees having a robust and protected right to whistleblow is a key first step.”


The message calls for companies to refrain from punishing or silencing current or former employees who speak out about the risks of AI, a likely reference to a scandal this month at OpenAI, where departing employees were told to choose between losing their vested equity or signing a non-disparagement agreement about the company that never expired. (OpenAI later lifted the requirement, saying, “It doesn’t reflect our values or the company we want to be.”)

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” an OpenAI spokesperson told The Independent. “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”

The company added that it takes a number of steps to ensure its employees are heard and its products are developed responsibly, including an anonymous hotline for workers and a Safety and Security Committee scrutinising the company’s developments. OpenAI also pointed to its support for increased AI regulation and voluntary commitments around AI safety.

The open letter follows a season of controversies for top AI firms, especially OpenAI, at the same time as they introduce AI assistants and aids with powerful new capabilities like the ability to have live voice-to-voice conversations with humans and react to visual information like a video feed or written math problem.

Actress Scarlett Johansson, who once voiced an AI assistant in the film Her, accused OpenAI of using her voice as a model for one of its products, despite her explicit rejection of an alleged proposal to do so. Though OpenAI CEO Sam Altman tweeted the word “her” at the launch of the voice assistant, the company has since denied using Johansson’s voice as a model.

Also in May, OpenAI disbanded a team dedicated to researching the long-term risks of AI, less than a year after it was formed.

Numerous top researchers at the firm have left in recent months, including co-founder Ilya Sutskever.

It’s the latest shake-up since the company suffered a high-profile board battle last year, in which Altman was temporarily removed and then reinstated less than a week later.

The Independent has contacted Anthropic and Google for comment.
