The UK should bar technology developers from working on advanced artificial intelligence tools unless they have a licence to do so, Labour has said.
Ministers should introduce much stricter rules around companies training their AI products on vast datasets of the kind used by OpenAI to build ChatGPT, Lucy Powell, Labour’s digital spokesperson, told the Guardian.
Her comments come amid a rethink at the top of government over how to regulate the fast-moving world of AI, with the prime minister, Rishi Sunak, acknowledging it could pose an “existential” threat to humanity.
One of the government’s advisers on artificial intelligence also said on Monday that humanity could have only two years before AI is able to outwit people, the latest in a series of stark warnings about the threat posed by the fast-developing technology.
Powell said: “My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that’s governing how they are built, how they are managed or how they are controlled.”
She suggested AI should be licensed in a similar way to medicines or nuclear power, both of which are governed by arm's-length government bodies. "That is the kind of model we should be thinking about, where you have to have a licence in order to build these models," she said. "These seem to me to be the good examples of how this can be done."
The UK government published a white paper on AI two months ago, which detailed the opportunities the technology could bring, but said relatively little about how to regulate it.
Since then, advances in tools such as ChatGPT and repeated warnings from industry insiders have prompted ministers to hastily update their approach. This week Sunak will travel to Washington DC, where he will argue that the UK should be at the forefront of international efforts to write a new set of guidelines to govern the industry.
Labour is also rushing to finalise its own policies on advanced technology. Powell, who will give a speech to industry insiders at the TechUK conference in London on 6 June, said she believed the disruption to the UK economy could be as drastic as the deindustrialisation of the 1970s and 1980s.
Keir Starmer, the Labour leader, is expected to give a speech on the subject during London Tech Week next week. He will also hold a shadow cabinet meeting in one of Google's UK offices, giving shadow ministers a chance to speak to some of the company's top AI executives.
Powell said that rather than banning certain technologies, as the EU has done with tools such as facial recognition, she thought the UK should focus on regulating the way in which they are developed.
Products such as ChatGPT are built by training algorithms on vast banks of digital information. But experts warn that if those datasets contain biased or discriminatory material, the products themselves can reproduce those biases. This could have a knock-on effect on employment practices, for example, if AI tools are used to help make hiring and firing decisions.
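To make that mechanism concrete, the sketch below is a deliberately simplified, hypothetical illustration in Python, not a description of how ChatGPT or any real hiring tool is built. A toy "hiring model" that learns only from skewed historical decisions ends up recommending candidates at exactly the skewed rates found in its training data.

```python
# Minimal illustrative sketch (hypothetical data and model): a toy hiring
# model that learns the hire rate per group from past decisions. If the
# history is biased, the learned model simply repeats that bias.

from collections import defaultdict

# Invented historical decisions: group "A" was hired 80% of the time,
# group "B" only 40% of the time.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

counts = defaultdict(lambda: [0, 0])  # group -> [times hired, total seen]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predict_hire_rate(group: str) -> float:
    """Return the probability of hiring learned purely from history."""
    hired, total = counts[group]
    return hired / total

print(predict_hire_rate("A"))  # 0.8 -- the historical skew, reproduced
print(predict_hire_rate("B"))  # 0.4 -- the historical skew, reproduced
```

The point of the toy is only this: a model trained to match past decisions has no way of knowing which parts of that history were unfair, which is why campaigners argue for transparency about training data.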
Powell said: “Bias, discrimination, surveillance – this technology can have a lot of unintended consequences.”
She argued that by forcing developers to be more open about the data they are using, governments could help mitigate those risks. “This technology is moving so fast that it needs an active, interventionist government approach, rather than a laissez-faire one.”
Matt Clifford, the chair of the Advanced Research and Invention Agency, which the government set up last year, said on Monday that AI was evolving much faster than most people realised. He said it could already be used to launch bioweapons or large-scale cyber-attacks, adding that humans could rapidly be surpassed by the technology they had created.
Speaking to TalkTV’s Tom Newton Dunn, Clifford said: “It’s certainly true that if we try and create artificial intelligence that is more intelligent than humans and we don’t know how to control it, then that’s going to create a potential for all sorts of risks now and in the future. So I think there’s lots of different scenarios to worry about but I certainly think it’s right that it should be very high on the policymakers’ agendas.”
Asked when that could happen, he added: “No one knows. There are a very broad range of predictions among AI experts. I think two years will be at the very most sort of bullish end of the spectrum.”