Tom’s Guide
Technology
Christoph Schwaiger

OpenAI insiders warn of 'human extinction' risk from AI systems — urging better whistleblower protections

Sam Altman, CEO of OpenAI.

A group of current and former OpenAI and Google DeepMind employees have claimed that AI companies “possess substantial non-public information about the capabilities and limitations of their systems” which they cannot be relied on to share voluntarily.

The claim was made in a widely reported open letter in which the group highlighted what they described as “serious risks” posed by AI. 

These risks include the further entrenchment of existing inequalities, manipulation and misinformation, and the loss of control of autonomous AI systems leading to possible “human extinction.” They lamented the lack of effective oversight and called for increased protections for whistleblowers.

The letter’s authors said they believe AI can bring unprecedented benefits to society and that the risks they highlighted can be reduced with the involvement of scientists, policymakers, and the general public. However, they said that AI companies have financial incentives to avoid effective oversight.

Ordinary whistleblower protections 'insufficient'


Claiming that AI companies know about the risk levels of different kinds of harm and the adequacy of their protective measures, the group of employees said the companies have only weak obligations to share this kind of information with governments “and none with civil society.” They added that broad confidentiality agreements block them from voicing their concerns publicly.

“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” they wrote.

They called on advanced AI companies not to retaliate against risk-related criticism and to create an anonymous system for employees to raise their concerns.

In May, Vox reported that former OpenAI employees are forbidden from criticizing their former employer for the rest of their lives. If they refuse to sign the agreement, they could lose all their vested equity earned during their time working for the company. OpenAI CEO Sam Altman later posted on X saying the standard exit paperwork would be changed.

In response to the open letter, a spokesperson for OpenAI told The New York Times the company is proud of its track record of providing the most capable and safest AI systems, and that it believes in its scientific approach to addressing risk.

“We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world,” the spokesperson said.

A Google spokesperson declined to comment.

The letter was signed by 13 current and former employees. All the current OpenAI employees who signed did so anonymously.

The AI world is no stranger to such open letters. Most famously, an open letter published by the Future of Life Institute and signed by the likes of Elon Musk and Steve Wozniak called for a six-month pause in AI development, a call that went unheeded.
