Fortune
Tristan Bove

A.I. could rival human intelligence in ‘just a few years,’ says CEO of Google’s main A.I. research lab

DeepMind CEO Demis Hassabis (Credit: Joy Malone—Getty Images)

While artificial intelligence tools like OpenAI’s ChatGPT promise to revolutionize every aspect of the economy, the true holy grail for A.I. researchers, artificial general intelligence (AGI), has remained elusive. AGI is the point at which a machine can understand the world at least as well as a human. Experts say achieving it will take anywhere from a few years to a few decades, if it is possible at all. The head of Google’s main A.I. research lab, for one, leans toward sooner rather than later.

“I think we’ll have very capable, very general systems in the next few years,” Demis Hassabis, CEO of DeepMind, a subsidiary of Google parent Alphabet, said Tuesday during a conference hosted by the Wall Street Journal.

“The progress in the last few years has been pretty incredible,” he continued. “I don’t see any reason why that progress is going to slow down. I think it may even accelerate. So I think we could be just a few years, maybe within a decade away.”

A.I. has been the exception to the tech sector’s downturn over the past year, with companies and venture capital firms doubling down on the technology. Much of that money is flowing to develop generative A.I., algorithms that can generate text or images based on specific prompts.  

Yet research, which looks even farther ahead, is already moving on from generative A.I. to AGI, according to 56% of computer scientists and A.I. researchers polled in a Stanford University survey last month. AGI would be significantly more sophisticated because it would be aware of what it says and does. Current systems rely on large amounts of training data and predictive models to anticipate the most likely answer to a prompt; in theory, AGI would be far less likely to spit out the inaccuracies and confused statements that tarnish the image of today’s technology.

DeepMind, acquired by Google in 2014, made headlines in 2021 when it publicized the results of its A.I. system AlphaFold, which predicted the structure of every known protein in the human genome. The milestone has huge implications for disease and medicinal research. And last year, the company made headlines again by analyzing the structure of almost every protein known to science, releasing over 200 million predictions in a free-to-access database. DeepMind has also developed A.I. capable of diagnosing complex eye diseases; systems that cut its parent company’s energy bills by 40%; and software that turns text into speech.

Last month, Google announced it was integrating its core A.I. research team with DeepMind and appointing Hassabis as the combined unit’s CEO to accelerate the company’s push toward AGI. Google and Alphabet CEO Sundar Pichai said in a statement that the overhaul would “help power the next generation of our products and services.”

Fast-tracked AGI development is not without its critics, however. Around 58% of A.I. experts called AGI an “important concern” in the Stanford survey, with 36% saying it could lead to a “nuclear-level catastrophe.” Some experts said AGI could represent a technological singularity, a hypothetical future moment when machines irreversibly surpass human abilities and could pose a threat to civilization.

Over 1,000 technologists and A.I. experts, including Elon Musk and Apple cofounder Steve Wozniak, signed an open letter in March calling for a six-month halt on advanced A.I. development to reprioritize ethics research. Among the signatories’ concerns were the dangers of unrestrained superhuman A.I.

Other observers have argued that advanced A.I. could cause severe damage to society if it is not reined in and aligned with human values. Paul Christiano, an A.I. researcher who formerly led safety and alignment work at OpenAI, cautioned last week that unchecked AGI development could lead to a “full-blown A.I. takeover scenario” that destroys humanity. Yuval Harari, author of the popular science book Sapiens and one of the open letter’s signatories, also wrote in The Economist last week that A.I. was akin to a poorly understood “alien intelligence, here on Earth” that could represent the “end of democracy” if used unethically.

Speaking on stage Tuesday, Hassabis promised DeepMind’s work on AGI won’t put civilization at risk anytime soon, as the company’s current priority is to integrate A.I. with more products, which he predicted would be as game-changing as the first iPhone. 

Hassabis also said Google would responsibly develop AGI, and asserted his view that the technology should be used in “a cautious manner using the scientific method where you try and do very careful controlled experiments to understand what the underlying system does.”
