The ‘Godfather’ of AI, and one of its biggest critics, believes the technology will soon become smarter than humans and could learn to manipulate them.
Geoffrey Hinton, a former AI engineer at Google, told 60 Minutes he expected artificial intelligence to become self-aware in time, making humans the second most intelligent beings on the planet.
Humans have about 100 trillion neural connections, while the biggest AI chatbots have just 1 trillion connections, according to Hinton.
However, he suggests the knowledge contained in those connections likely far exceeds what any single human knows.
Eventually, Hinton says, computer systems might be able to write their own code to modify themselves, in a sense going rogue. And if they do, he thinks AI will find a way to stop itself from being switched off by humans.
“They will be able to manipulate people,” Hinton told 60 Minutes.
“These will be very good at convincing because they'll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances. They'll know all that stuff.”
Bigger threat than climate change
Hinton quit his role at Google in May after more than a decade with the company, in part so he could speak out about the technology's growing risks and lobby for safeguards and regulation.
While at Google, Hinton helped build the company's AI chatbot Bard, the tech giant's competitor to OpenAI's ChatGPT. He also laid the foundations for AI's growth through his pioneering work on neural networks, which earned him the prestigious Turing Award.
Since he quit, Hinton has been one of the leading voices warning of AI's dangers. Following his resignation announcement in the New York Times, he told Reuters he thought the tech had become a bigger threat to humans than climate change.
In late May, his name topped a list of hundreds of experts, including OpenAI CEO Sam Altman, calling for urgent regulation of AI.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the 22-word statement read.
Hinton’s biggest worry about AI right now pertains to the labor market. He told 60 Minutes he feared a whole class of people would find themselves unemployed as more capable AI systems take their place.
In the longer run, though, he worries about AI's militaristic potential. In his interview with 60 Minutes, Hinton called for governments to commit to not building battlefield robots. The warning echoes J. Robert Oppenheimer's appeals to world leaders to halt the development of more powerful nuclear weapons after he led the creation of the first atomic bomb.
Hinton summed up by saying he couldn't see a path that guarantees safety, adding he wasn't sure AI could ever be stopped from wanting to take over from humanity.
The world’s major governments appear to have heard Hinton’s and others’ warnings loud and clear.
The U.K. will host the first global AI summit in November, which is expected to be attended by 100 politicians, academics, and AI experts.
It could lay the groundwork for sweeping regulatory changes by major countries including the United States.
The U.S. is crafting an AI Bill of Rights, and in the coming months is expected to bring in safeguards that tech companies must abide by.
The European Union is crafting its own guardrails around AI, known as the AI Act. However, the potential for regulations that vary by geography is creating tension.
In June, more than 150 major European executives asked the EU to pull back on its proposed AI restrictions, including increased bureaucracy and safety testing for certain technologies. They argued these would create a "critical productivity gap" in the region that would leave it trailing the U.S.