Fortune
Orianna Rosa Royle

‘Godfather of A.I.’ to talk to Elon Musk, Bernie Sanders and the White House

Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019. The Toronto Global Forum is a non-profit organization fostering dialogue on national and global issues that brings together heads of states, central bank governors, ministers and global economic decision makers. (Credit: Cole Burston—Bloomberg/Getty Images)

Geoffrey Hinton, the 75-year-old tech trailblazer who recently said he regrets his life’s work in the field of artificial intelligence because of the threat it poses to humanity, has been inundated with requests to share his knowledge. 

In fact, he claims he has been fielding requests to talk every two minutes since announcing his resignation from Google so that he could speak freely about “the dangers of A.I.”

Now, the Turing Award-winning scientist often referred to as the “godfather of artificial intelligence” has told the Guardian that he will share his wisdom with Bernie Sanders, Elon Musk and the White House—but they might not like the advice he has to offer.

“The U.S. government inevitably has a lot of concerns around national security. And I tend to disagree with them,” he said. “For example, I’m sure that the defense department considers that the only safe hands for this stuff is the U.S. defense department—the only group of people to actually use nuclear weapons.”

The London-born pioneer added that as a “socialist”, he’s not on board with the “private ownership of the media and of the means of computation”.

“If you view what Google is doing in the context of a capitalist system, it’s behaving as responsibly as you could expect it to do,” he said. “But that doesn’t mean it’s trying to maximize utility for all people: it’s legally obliged to maximize utility for its shareholders, and that’s a very different thing.”

Hinton’s advice for policymakers

It’s clear that the dangers of A.I. are a growing concern among the world’s most powerful people and policymakers. 

The chief executives of Alphabet Inc.’s Google, Microsoft, OpenAI and Anthropic are meeting with the White House today to discuss how to ensure their A.I. products are safe before public release. 

It’s also why Elon Musk, Apple cofounder Steve Wozniak and over 1,100 prominent technologists and artificial intelligence researchers have called for a six-month pause on advanced A.I. development so that governments can catch up and put “governance systems” in place.

Meanwhile, Sen. Bernie Sanders recently proposed introducing a robot tax to mitigate “millions of workers” losing their jobs at the hands of billionaires.

Having spent decades researching deep learning and laying the groundwork for A.I., Hinton might be expected to have some sage advice for policymakers on how to move forward safely.

Unfortunately, even the “godfather of artificial intelligence” is stumped for answers.

“I’m not a policy guy,” he told the Guardian. “I’m just someone who’s suddenly become aware that there’s a danger of something really bad happening. I wish I had a nice solution, like: ‘Just stop burning carbon, and you’ll be OK.’ But I can’t see a simple solution like that.”

Why Hinton’s “not that optimistic” about the future

In 1972, Hinton began his career as a graduate student at the University of Edinburgh, where he first started his research on neural networks, mathematical models that roughly mimic the workings of the human brain and are capable of analyzing vast amounts of data.

He and two of his students went on to launch DNNresearch—which Google acquired in 2013 for $44 million—and their breakthrough research, which later earned Hinton the Turing Award, would ultimately pave the way for today’s A.I.

Looking back over the past 50 years of his career, Hinton told the Guardian that he’s “been trying to make computer models that can learn stuff a bit like the way the brain learns”.

But it was only “very recently” that he realized these “big models are actually much better than the brain”. 

“We need to think hard about it now, and if there’s anything we can do,” he said, while warning that the outlook isn’t promising. 

“The reason I’m not that optimistic is that I don’t know any examples of more intelligent things being controlled by less intelligent things,” he added.

“You need to imagine something that is more intelligent than us by the same degree that we are more intelligent than a frog. It’s all very well to say: ‘Well, don’t connect them to the internet,’ but as long as they’re talking to us, they can make us do things.”
