Fortune
Chloe Taylor

‘The Godfather of A.I.’ warns of ‘nightmare scenario’ where artificial intelligence begins to seek power

Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019. (Credit: Cole Burston—Bloomberg/Getty Images)

The so-called Godfather of A.I. continues to issue warnings about the dangers advanced artificial intelligence could bring, describing a “nightmare scenario” in which chatbots like ChatGPT begin to seek power.

In an interview with the BBC on Tuesday, Geoffrey Hinton—who announced his resignation from Google to the New York Times a day earlier—said the potential threats posed by A.I. chatbots like OpenAI’s ChatGPT were “quite scary.”

“Right now, they’re not more intelligent than us, as far as I can tell,” he said. “But I think they soon may be.”

“What we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has, and it eclipses them by a long way,” he added.

“In terms of reasoning, it’s not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast—so we need to worry about that.”

Hinton’s research on deep learning and neural networks—mathematical models that mimic the human brain—helped lay the groundwork for artificial intelligence development, earning him the nickname “the Godfather of A.I.”

He joined Google in 2013 after the tech giant bought his company, DNN Research, for $44 million.

‘A nightmare scenario’

While Hinton told the BBC on Tuesday that he believed Google had been “very responsible” when it came to advancing A.I.’s capabilities, he told the Times on Monday that he had concerns about the tech’s potential should a powerful version fall into the wrong hands.

When asked to elaborate on this point, he said: “This is just a kind of worst-case scenario, kind of a nightmare scenario.

“You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own subgoals.”

Eventually, he warned, this could lead to A.I. systems creating objectives for themselves like: “I need to get more power.”

“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” Hinton told the BBC.

“We’re biological systems, and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

“All these copies can learn separately but share their knowledge instantly, so it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

Hinton’s conversation with the BBC came after he told the Times he regretted his life’s work because of the potential for A.I. to be misused.

“It is hard to see how you can prevent the bad actors from using it for bad things,” he said on Monday. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

Since announcing his resignation from Google, Hinton has been vocal about his concerns surrounding artificial intelligence.

In a separate interview with the MIT Technology Review published on Tuesday, Hinton said he wanted to raise public awareness of the serious risks he believes could come with widespread access to large language models like GPT-4.

“I want to talk about A.I. safety issues without having to worry about how it interacts with Google’s business,” he told the publication. “As long as I’m paid by Google, I can’t do that.”

He added that people’s outlook on whether superintelligence was going to be good or bad depends on whether they are optimists or pessimists—and noted that his own opinions on whether A.I.’s capabilities could outstrip those of humans had changed.

“I have suddenly switched my views on whether these things are going to be more intelligent than us,” he said. “I think they’re very close to it now, and they will be much more intelligent than us in the future. How do we survive that?”

Wider concern

Hinton isn’t alone in speaking out about the potential dangers that advanced large language models could bring.

In March, more than 1,100 prominent technologists and artificial intelligence researchers—including Elon Musk and Apple cofounder Steve Wozniak—signed an open letter calling for the development of advanced A.I. systems to be put on a six-month hiatus.

Musk had previously voiced concerns about the possibility of runaway A.I. and “scary outcomes” including a Terminator-like apocalypse, despite being a supporter of the technology.

OpenAI—which was cofounded by Musk—has publicly defended its chatbot phenomenon amid rising concerns about the technology’s potential and the rate at which it is progressing.

In a blog post published earlier this month, the company admitted that there were “real risks” linked to ChatGPT, but argued that its systems were subjected to “rigorous safety evaluations.”

When GPT-4—the successor to the A.I. model that powered ChatGPT—was released in March, Ilya Sutskever, OpenAI’s chief scientist, told Fortune the company’s models were “a recipe for producing magic.”
