The Guardian - UK
Technology
Dan Milmo, global technology editor

Google chief warns AI could be harmful if deployed wrongly

Sundar Pichai, the chief executive of Google, says the economic impact of artificial intelligence will be significant. Photograph: Sajjad Hussain/AFP/Getty Images

Google’s chief executive has said concerns about artificial intelligence keep him awake at night and that the technology can be “very harmful” if deployed wrongly.

Sundar Pichai also called for a global regulatory framework for AI similar to the treaties used to regulate nuclear arms use, as he warned that the competition to produce advances in the technology could lead to concerns about safety being pushed aside.

In an interview on CBS’s 60 Minutes programme, Pichai said the negative side to AI gave him restless nights. “It can be very harmful if deployed wrongly and we don’t have all the answers there yet – and the technology is moving fast. So does that keep me up at night? Absolutely,” he said.

Google’s parent, Alphabet, owns the UK-based AI company DeepMind and has launched an AI-powered chatbot, Bard, in response to ChatGPT, a chatbot developed by the US tech firm OpenAI, which has become a phenomenon since its release in November.

Pichai said governments would need to figure out global frameworks for regulating AI as it developed. Last month, thousands of artificial intelligence experts, researchers and backers – including the Twitter owner Elon Musk – signed a letter calling for a pause in the creation of “giant” AIs for at least six months, amid concerns that development of the technology could get out of control.

Asked if nuclear arms-style frameworks could be needed, Pichai said: “We would need that.”

The AI technology behind ChatGPT and Bard, known as a large language model, is trained on a vast trove of data taken from the internet and is able to produce plausible responses to prompts from users in a range of formats, from poems to academic essays and software coding. The image-generating equivalent, in systems such as Dall-E and Midjourney, has also triggered a mixture of astonishment and alarm by producing realistic images such as the pope sporting a puffer jacket.

Pichai added that AI could cause harm through its ability to produce disinformation. “It will be possible with AI to create, you know, a video easily. Where it could be Scott [Pelley, the CBS interviewer] saying something, or me saying something, and we never said that. And it could look accurate. But you know, on a societal scale, you know, it can cause a lot of harm.”

The Google chief added that the version of its AI technology now available to the public, via the Bard chatbot, was safe. He added that Google was being responsible by holding back more advanced versions of Bard for testing.

Pichai’s comments came as the New York Times reported on Sunday that Google was building a new AI-powered search engine in response to Microsoft’s rival service Bing, which has been integrated with the chatbot technology behind ChatGPT.

Pichai admitted that Google did not fully understand how its AI technology produced certain responses.

“There is an aspect of this which we call, all of us in the field call it as a ‘black box’. You know, you don’t fully understand. And you can’t quite tell why it said this, or why it got it wrong.”

Asked by the CBS journalist Scott Pelley why Google had released Bard publicly when the company did not fully understand how it worked, Pichai replied: “Let me put it this way. I don’t think we fully understand how a human mind works either.”

Pichai admitted that society did not appear to be ready for rapid advances in AI. He said there “seems to be a mismatch” between the pace at which society thinks and adapts to change and the pace at which AI is evolving. However, he added that at least people had become alert to its potential dangers more quickly than with earlier technologies.

“Compared to any other technology, I’ve seen more people worried about it earlier in its life cycle. So I feel optimistic,” he said.

Pichai said the economic impact of AI would be significant because the technology would touch everything. He added: “This is going to impact every product across every company and so that’s why I think it’s a very, very profound technology.”

Using a medical example, Pichai said in five to 10 years a radiologist could be working with an AI assistant to help prioritise cases. He added that “knowledge workers” such as writers, accountants, architects and software engineers would be affected.
