Seoul summit showcases UK’s progress on trying to make advanced AI safe

Alex Hern in Seoul

Michelle Donelan, the UK technology secretary, listens as South Korea's minister of science and ICT, Lee Jong-ho, speaks at a press briefing in Seoul on 22 May. Photograph: Anthony Wallace/AFP/Getty Images

The UK is leading an international effort to test the most advanced AI models for safety risks before they hit the public, as regulators race to create a workable safety regime before the Paris summit in six months.

Britain’s AI Safety Institute, the first of its kind, is now matched by counterparts from around the world, including South Korea, the US, Singapore, Japan and France.

Regulators at the Seoul AI Summit hope the bodies can collaborate to create the 21st-century version of the Montreal Protocol, the groundbreaking agreement to control CFCs and close the hole in the ozone layer.

But before they do, the institutes need to agree on how they can work together to turn an international patchwork of approaches and regulations into a unified effort to corral AI research.

“At Bletchley, we announced the UK’s AI Safety Institute – the world’s first government-backed organisation dedicated to advanced AI safety for the public good,” said Michelle Donelan, the UK technology secretary, in Seoul on Wednesday. She credited the “Bletchley effect” for prompting the creation of a global network of peers doing the same thing.

Those institutes will begin sharing information about models, their limitations, capabilities and risks, as well as monitoring specific “AI harms and safety incidents” where they occur and sharing resources to advance global understanding of the science of AI safety.

At the first “full house” meeting of those countries on Wednesday, Donelan warned the creation of the network was only a first step. “We must not rest on our laurels. As the pace of AI development accelerates, we must match that speed with our own efforts if we are to grip the risks and seize the limitless opportunities for our public.”

The network of safety institutes has a hard deadline. This autumn, leaders will again meet, this time in Paris, for the first full AI summit since Bletchley. There, if conversation is to progress from how to test AI models to how to regulate them, the safety institutes will have to demonstrate that they have mastered what Donelan called “the nascent science of frontier AI testing and evaluation”.

Jack Clark, the co-founder and head of policy at AI lab Anthropic, said that simply setting up a functional safety institute places the UK “a hundred miles” further along the road to safe AI than the world was two years ago.

“I think what we now need to do is to encourage governments, as I’ve been doing here, to continue to invest the money required to set up the safety institutes and fill them with enough technical people that they really can create their own information and evidence,” he said.

As part of the investment into that science, Donelan announced £8.5m in funding to “break new grounds” in AI safety testing.

Francine Bennett, interim director of the Ada Lovelace Institute, called that funding a good start and said it would need to “pave the way for a much more substantial programme of understanding and protecting against social and systemic risk.

“It’s great to see the safety institute and the government taking steps towards a broader view of what safety means, both in the State of the Science report and with this funding; we’re recognising that safety isn’t something you can sufficiently test for in a lab,” Bennett added.

The summit was criticised for leaving key voices out of the conversation. No Korean civil society groups were present, with the host country representing itself only through academia, government and industry, while only the largest AI businesses were invited to take part. Roeland Decorte, president of the AI Founders Association, warned that as a result the discussions risked “focusing only on flashy large-scale models, of which only a handful will come to dominate and which can only be created currently by the biggest players at a financial loss”.

“The question is, in the end, do we want to regulate and build for a future mature AI economy that will create a sustainable framework for the majority of companies operating in the space,” he added.
