Creative Bloq
Technology
Joseph Foley

It looks like even Elon Musk is now terrified of AI

An AI-generated image of a robot

Many creatives are becoming concerned about the power of generative AI due to advances in text-to-image models like DALL-E 2, Midjourney and Stable Diffusion. But now even some of the people involved in developing the technology are getting worried, which probably means we should be more terrified than we already are.

More than 1,000 people who have been involved in the field, including Tesla CEO Elon Musk – an early investor in the company responsible for DALL-E 2 – have signed an ominous open letter calling for an immediate six-month pause on training more powerful AI models to avoid potential risks to the future of civilisation.

The AI image generator Stable Diffusion's interpretation of the text prompt "a super-human artificial intelligence capable of replacing humans and destroying civilization" (Image credit: Joseph Foley via Stable Diffusion)

The letter published by the Future of Life Institute warns that the development of artificial intelligence has become a "dangerous race", reaching a point at which "AI systems with human-competitive intelligence can pose profound risks to society and humanity". 

They want development to be halted for six months at the level reached by OpenAI's GPT-4. And they say that governments should step in to enforce a moratorium if a pause is not enacted quickly.

"Should we automate away all the jobs, including the fulfilling ones?" the letter asks grimly. "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete [sic] and replace us? Should we risk loss of control of our civilization?"

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable" it insists. "AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

A super-human intelligence surveying its destruction (Image credit: Joseph Foley via Stable Diffusion)

Fears that AI may bring about the end of civilisation as we know it are nothing new. What's scary is that people involved in the field, some of whom have developed AI models themselves, now worry that things are getting out of their control.

As well as Musk, who was previously an investor in DALL-E and ChatGPT developer OpenAI, the letter was signed by Emad Mostaque, the CEO of Stable Diffusion developer Stability AI. Other signatories include Apple co-founder Steve Wozniak, Canadian computer scientist Yoshua Bengio, AI researcher Stuart Russell, researchers at Alphabet's AI research lab DeepMind and Meta head of design Christopher Reardon.

Their concerns highlight the vertiginous pace of AI development, which has reached a point where even the developers of AI models no longer know exactly what their creations can do or how they will behave. This suggests we need to define ways to regulate the sector as a whole, as well as a way to resolve specific issues like copyright and deepfakes.
