Fortune
Prarthana Prakash

‘The Godfather of A.I.’ just quit Google and says he regrets his life’s work because it can be hard to stop ‘bad actors from using it for bad things’

Geoffrey Hinton (Credit: Cole Burston—Bloomberg/Getty Images)

Geoffrey Hinton is the tech pioneer behind some of the key developments in artificial intelligence powering tools like ChatGPT that millions of people are using today. But the 75-year-old trailblazer says he regrets the work he has devoted his life to because of how A.I. could be misused.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told the New York Times in an interview published Monday. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

Hinton, often referred to as “the Godfather of A.I.,” spent years in academia before joining Google in 2013, when the company bought his startup for $44 million. He told the Times that Google has been a “proper steward” of how A.I. technology should be deployed and that the tech giant has acted responsibly for its part. But he left the company in May so that he could speak freely about “the dangers of A.I.”

According to Hinton, one of his main concerns is how easy access to A.I. text- and image-generation tools could lead to more fake or fraudulent content being created, and how the average person would “not be able to know what is true anymore.” 

Concerns surrounding the improper use of A.I. have already become a reality. Fake images of Pope Francis in a white puffer jacket made the rounds online a few weeks ago, and last week the Republican National Committee published deepfake visuals depicting China invading Taiwan and banks failing if President Joe Biden is reelected.

As companies like OpenAI, Google, and Microsoft work on upgrading their A.I. products, there are also growing calls to slow the pace of new developments and regulate a space that has expanded rapidly in recent months. In March, some of the top names in the tech industry, including Apple cofounder Steve Wozniak and computer scientist Yoshua Bengio, signed an open letter calling for a pause on the development of advanced A.I. systems. Hinton didn’t sign the letter, although he believes companies should think carefully before scaling A.I. technology further.

“I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Hinton is also worried about how A.I. could change the job market by making nontechnical jobs obsolete, and he warned that it could eventually affect a wider range of roles.

“It takes away the drudge work,” Hinton said. “It might take away more than that.”

When asked for a comment about Hinton’s interview, Google emphasized the company’s commitment to a “responsible approach.” 

“Geoff has made foundational breakthroughs in A.I., and we appreciate his decade of contributions at Google,” Jeff Dean, the company’s chief scientist, told Fortune in a statement. “As one of the first companies to publish A.I. principles, we remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”

Hinton did not immediately return Fortune’s request for comment.

A.I.’s ‘pivotal moment’

Hinton began his career as a graduate student at the University of Edinburgh in 1972. It was there that he began his work on neural networks, mathematical models that roughly mimic the workings of the human brain and can analyze vast amounts of data.

His neural network research was the breakthrough concept behind DNNresearch, a company he founded with two of his students and that Google ultimately bought in 2013. Hinton won the 2018 Turing Award—the equivalent of a Nobel Prize in the computing world—alongside two colleagues (one of whom was Bengio) for their work on neural networks, which has been key to the creation of technologies including OpenAI’s ChatGPT and Google’s Bard chatbot.

As one of the key thinkers in A.I., Hinton sees the current moment as “pivotal” and ripe with opportunity. In an interview with CBS in March, he said he believes A.I. is advancing faster than our ability to control it—and that’s a cause for concern.

“It’s very tricky things. You don’t want some big for-profit companies to decide what is true,” he told CBS Mornings. “Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose A.I. And now I think it may be 20 years or less.”

Hinton added that we could be close to computers being able to come up with ideas to improve themselves. “That’s an issue, right? We have to think hard about how you control that.” 

Hinton said that Google is going to be a lot more careful than Microsoft when it comes to training and presenting A.I.-powered products and cautioning users about the information shared by chatbots. Google has been at the forefront of A.I. research for a long time—well before the recent generative A.I. wave caught on. Sundar Pichai, CEO of Google parent Alphabet, has famously likened A.I. to other innovations that have shaped humankind.

“I’ve always thought of A.I. as the most profound technology humanity is working on—more profound than fire or electricity or anything that we’ve done in the past,” Pichai said in an interview aired in April. Just like humans learned to skillfully harness fire despite its dangers, Pichai thinks humans can do the same with A.I.

“It gets to the essence of what intelligence is, what humanity is,” Pichai said. “We are developing technology which, for sure, one day will be far more capable than anything we’ve ever seen before.”
