Fortune
Rachel Shin

Elon Musk wants to create a superintelligent A.I. because he thinks a smarter A.I. is less likely to wipe out humanity

(Credit: Nathan Laine—Bloomberg/Getty Images)

Elon Musk was a cofounder of OpenAI—the maker of the popular A.I. chatbot ChatGPT—before pulling out of the company due to conflicts with CEO Sam Altman and the board. Now he claims that his own A.I. venture, xAI, will rival OpenAI in achieving the lofty goal of artificial superintelligence.

In a nearly two-hour-long Twitter Spaces talk on Friday, the world’s richest man discussed his company’s goal of creating an AGI (artificial general intelligence), meaning an A.I. that is at least as smart as a human.

“The overarching goal of xAI is to build a good AGI with the overarching purpose of just trying to understand the universe,” Musk said during the talk. “The safest way to build an A.I. is actually to make one that is maximally curious and truth-seeking.”

The Tesla CEO added that influencing the future was part of his motivation in running xAI, as he didn’t want to be sidelined in the current A.I. race. Spectators don’t get to decide “the outcome,” he explained.  

AGI is often conceptualized as a machine that can perform any task a human can and is as intelligent as a human. It has long been a theme in science fiction, framed as an existential threat with enough intelligence and autonomy to destroy humanity. Musk thinks AGI is “inevitable,” so he wants to have a hand in making sure it’s developed according to his standards, he said. 

Musk added that a “unipolar” future in which one company monopolizes A.I. would be undesirable. His comments come after OpenAI’s July 5 blog post about superintelligence, which also predicted that AGI is inevitable and could even arrive within the decade.

“Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” OpenAI cofounder Ilya Sutskever and colleague Jan Leike wrote. “Humans won’t be able to reliably supervise AI systems much smarter than us.”

OpenAI’s Altman has made similar doomsday remarks about the technology he helped pioneer, comparing the threat of superintelligence to pandemics and nuclear war. He’s also expressed fear that an “autocratic regime” with access to the technology could use it to harm the world. 

Musk takes an opposite approach, though—he thinks a superintelligent AGI would actually be the most human-friendly, if it took a liking to us.

“My sort of theory behind the maximally curious, maximally truthful as being probably the safest approach is that I think to a superintelligence, humanity is much more interesting than not humanity,” Musk said on Twitter Spaces. “Look at the various planets in our solar system, the moons and the asteroids, and really probably all of them combined are not as interesting as humanity.”

If xAI creates a machine smart enough to find humans more amusing than rocks in space, we might just stand a chance at survival, according to the billionaire. His theory calls to mind the classic 1967 short story “I Have No Mouth, and I Must Scream,” about a supercomputer called AM that has consumed the entire world and destroyed nearly all of humanity, sparing only five unlucky survivors. Having gained complete control of the world, AM tortures those five humans for eternity as its only form of entertainment (or revenge).

It’s likely the “maximally curious” superintelligence that Musk strives for would indeed find humanity interesting—but that may still end up as a net negative for humans, if life imitates art.
