Fortune
Jeremy Kahn

Musk and Wozniak among 1,100+ signing open letter calling for 6-month ban on creating powerful A.I.

Elon Musk. (Photo credit: Marlena Sloss—Bloomberg via Getty Images)

Elon Musk and Apple co-founder Steve Wozniak are among the prominent technologists and artificial intelligence researchers who have signed an open letter calling for a six-month moratorium on the development of advanced A.I. systems.

In addition to the Tesla CEO and Apple co-founder, the more than 1,100 signatories of the letter include Emad Mostaque, the founder and CEO of Stability AI, the company that helped create the popular Stable Diffusion text-to-image generation model, and Connor Leahy, the CEO of Conjecture, another A.I. lab. Evan Sharp, a co-founder of Pinterest, and Chris Larsen, a co-founder of cryptocurrency company Ripple, have also signed, as has deep learning pioneer and Turing Award-winning computer scientist Yoshua Bengio.

The letter urges technology companies to immediately cease training any A.I. systems that would be "more powerful than GPT-4," the latest large language model developed by San Francisco-based OpenAI. The letter does not say exactly how the "power" of a model should be defined, but in recent A.I. advances, capability has tended to correlate with a model's size and the number of specialized computer chips needed to train it.

Runaway A.I.

Musk has long been outspoken about the threat he believes runaway A.I. may pose to humanity. He co-founded OpenAI, establishing it as a non-profit research lab in 2015, and served as its biggest initial donor. In 2018, he broke with the company and left its board. More recently, he has been critical of the company's decision to launch a for-profit arm and accept billions of dollars in investment from Microsoft.

OpenAI is now among the most prominent companies developing large foundation models, which are mostly trained on massive amounts of text, images, and video culled from the internet. These models can perform many different tasks without task-specific training. Versions of these models power ChatGPT as well as Microsoft's Bing chat feature and Google's Bard.

It is the potential of these systems to handle many different tasks once thought to be the sole province of highly trained people, such as coding, drafting legal documents, and analyzing data, that has raised fears of widespread job losses from their deployment in business. Others fear that such systems are a step on the path toward A.I. that might exceed human intelligence, with potentially dire consequences.

'Human-competitive'

The letter says that with A.I. systems such as GPT-4 now "becoming human-competitive at general tasks," there are risks that such systems will be used to generate misinformation on a massive scale, as well as risks of mass automation of jobs. The letter also raises the prospect that these systems are on the path to a superintelligence that could pose a grave risk to all human civilization. It says that decisions about A.I. "must not be delegated to unelected tech leaders" and that more powerful A.I. systems should only "be developed once we are confident that their effects will be positive and their risks will be manageable."

It calls for all A.I. labs to immediately stop training A.I. systems more powerful than GPT-4 for at least six months and says that the moratorium should be "verifiable." The letter does not say how such verification would work, but it says that if the companies themselves do not agree to a pause, governments around the world "should step in and institute a moratorium."

The letter says that the development and refinement of existing A.I. systems can continue, but that the training of newer, even more powerful ones should be paused. “A.I. research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” the letter says.

It also says that during the six-month pause, A.I. companies and academic researchers should develop a set of shared safety protocols for A.I. design and development that could be independently audited and overseen by unnamed outside experts.

'Robust' governance

The letter also calls on governments to use the six-month window to “dramatically accelerate development of robust A.I. governance systems.”

It says such a regulatory framework should include new authorities capable of tracking and overseeing the development of advanced A.I. and the large data centers used to train it. It also says governments should develop ways to watermark and establish the provenance of A.I.-generated content, both to guard against deepfakes and to detect whether any companies have violated the moratorium or other governance rules. It adds that governments should also enact liability rules for "A.I.-caused harm" and increase public funding for A.I. safety research.

Finally, it says governments should establish "well-resourced institutions" for dealing with the economic and political disruption advanced A.I. will cause. These should, at a minimum, include "new and capable regulatory authorities dedicated to A.I."

The letter was put out under the auspices of the Future of Life Institute. The organization, co-founded by MIT physicist Max Tegmark and Skype co-founder Jaan Tallinn, has been among the most vocal groups calling for more regulation of the use of A.I.

Neither OpenAI nor any of the other large technology companies developing these powerful A.I. models has yet commented on the open letter.
