Four weeks ago in San Francisco’s Chinatown, an empty self-driving taxi was mobbed and set on fire. It’s still not clear whether the crowd attacked the Waymo robot car – billed as “the future of transportation” – out of some broader irritation with a Californian tech elite seen as threatening jobs, anger over autonomous vehicles causing accidents, or just because it blundered into the middle of crowds celebrating lunar new year.
But either way, understanding how people really feel about the rapid advance of potentially dystopian technologies seems newly urgent, in a week that saw British parliamentarians debating the introduction of self-driving cars on British streets, while Jeremy Hunt announced funding for police to use drones as first responders to some 999 calls.
It’s not exactly RoboCop, but flying cameras over an accident or crime scene raises some tricky questions nonetheless. How would an angry crowd at a protest react to a drone whirring overhead capturing evidence? Does a real live human arriving on the scene of a car crash offer valuable reassurance, even if it’s not necessarily the best use of police time? This is only the beginning of what looks like a potentially seismic shift in the state’s relationship with AI, with serious implications for vulnerable people relying on public services and for workers whose public sector jobs may eventually be automated out from under them.
The deputy prime minister, Oliver Dowden, calls AI a “silver bullet” in the eternal Tory quest to shrink the state, and presumably free up money for tax cuts. Though Labour is keener to talk up potential benefits for the NHS, with some AI tools now better than humans at reading cancer scans, it won’t be blind to the potential savings offered by automating routine administrative work – or to big tech’s wider potential to drive desperately needed economic growth. For politicians set on improving public services without hiking taxes, AI is the obvious straw to grasp for, but there are risks as well as benefits to relying on the Elon Musks of this world. What’s surprising, in a general election year, is the lack of honest public debate about them. Which brings me to AI Needs You, a timely and fascinating new book by the former Downing Street aide-turned-tech executive Verity Harding, which argues that it’s high time the public got a say on what kind of world we actually want to live in.
When she quit her job advising Nick Clegg just over a decade ago to work for Google’s DeepMind AI lab, Harding admits most of her colleagues couldn’t understand her interest in something so seemingly nerdy and niche. She could be enjoying the last laugh now from California, but instead she’s back in Britain, running an academic project at Cambridge University on regulating AI for the global good, and increasingly urgently banging a drum for stronger political leadership over something capable of turning jobs, lives and societies upside down if we let it.
What frustrates her most is the widespread assumption that the genie is now out of the bottle, leaving society to roll with the consequences of whatever a handful of tech billionaires decides to unleash next. “We should be thinking: ‘What do we want and how do we use technology?’, not ‘What technology is coming that we just have to put up with’,” she told me.
The book draws comparisons with the way John F Kennedy took charge of the space race (he used the United States’ moonshot not merely to advance scientific research or inspire the public, but to show a frightened cold war Europe that liberal democracies could still outstrip mighty authoritarian Russia), and with Britain’s approach in the 1980s to the emerging science of IVF, which was novel and morally complex at the time. The principles devised by the philosopher Mary Warnock for governing embryology, reflecting the human and social consequences of making test tube babies as well as the science, became a model for governments worldwide. Both examples suggest we could have more choices and control than we think over AI, Harding argues, so long as we recognise that good things don’t happen by accident.
That means tackling the antisocial uses of AI, which include the convincing “deepfake” images of real people used in pornography, and political disinformation. But it will also require nudging markets towards socially useful outcomes. Why, Harding asks, aren’t we harnessing the incredible power of AI to help solve the climate crisis? Why do we act as if humanity is helpless to control something it’s actively inventing? Many of the things we fear most about AI, she argues, are really just traits we dislike in ourselves. This is unsurprising, given that AI is trained on human data and mimics human thinking. But we can use that insight to ensure AI reflects the best of us, not the worst. In the meantime, she recommends not believing the wildest industry hype about what tools still in their infancy are reliably capable of doing.
Harding’s approach demands vision and confidence from politicians at a time when many are wary of challenging the tech industry, increasingly seen – much as Facebook or Amazon were in the 2010s, and the City was before that – as the economic goose now best placed to lay golden eggs. Chancellors never want to stifle the next big hope for growth, and prime ministers always fear being left behind in an international arms race by countries willing to apply a lighter regulatory touch. Rishi Sunak’s fawning encounter with Musk at last year’s government AI summit, where the latter blithely declared that human work would eventually become redundant, felt horribly like an insight into where power really lies.
More prosaically, tech companies are generous hirers of political talent, from Harding’s old boss Clegg (now a senior executive at Meta) downwards. How many Tory MPs and special advisers currently facing political oblivion want to upset people they might soon be begging for a job?
Harding, who knows these two incestuous worlds better than most, is right, however, that this extraordinary chapter of human history doesn’t have to end in catastrophe, or in angry mobs rising up against a tech elite perceived as having gone too far. But perhaps only if we all understand that we have more agency than we think; that the nerdy wizards yanking levers behind Silicon Valley’s curtain aren’t quite as omnipotent as they seem; that AI is still our servant, not our master; and that the point of politics is to shape events, not to flap around limply in their wake. For now, at least, there’s still power in being human. But only if we use it.
Gaby Hinsliff is a Guardian columnist