The Guardian - UK
Technology
Dan Milmo, global technology editor

UK needs system for recording AI misuse and malfunctions, thinktank says

Among ‘safety incidents’ compiled by the OECD was Google’s Gemini portraying German second world war soldiers as people of colour. Photograph: Jonathan Raa/NurPhoto/REX/Shutterstock

The UK needs a system for recording misuse and malfunctions in artificial intelligence or ministers risk being unaware of alarming incidents involving the technology, according to a report.

The next government should create a system for logging incidents involving AI in public services and should consider building a central hub for collating AI-related episodes across the UK, said the Centre for Long-Term Resilience (CLTR), a thinktank.

CLTR, which focuses on government responses to unforeseen crises and extreme risks, said an incident reporting regime such as the system operated by the Air Accidents Investigation Branch (AAIB) was vital for using the technology successfully.

The report cites 10,000 AI “safety incidents” recorded by news outlets since 2014, listed in a database compiled by the Organisation for Economic Co-operation and Development (OECD), an international research body. The OECD’s definition of a harmful AI incident ranges from physical harm to economic, reputational and psychological harms.

Examples logged on the OECD’s AI safety incident monitor include a deepfake of the Labour leader, Keir Starmer, purportedly being abusive to party staff, Google’s Gemini model portraying German second world war soldiers as people of colour, incidents involving self-driving cars and a man who planned to assassinate the late queen drawing encouragement from a chatbot.

“Incident reporting has played a transformative role in mitigating and managing risks in safety-critical industries such as aviation and medicine. But it’s largely missing from the regulatory landscape being developed for AI. This is leaving the UK government blind to the incidents that are emerging from AI’s use, inhibiting its ability to respond,” said Tommy Shaffer Shane, a policy manager at CLTR and the report’s author.

CLTR said the UK government should follow the example of safety-critical industries such as aviation and medicine and introduce a “well-functioning incident reporting regime”. It said many AI incidents would probably not be covered by UK watchdogs because there was no regulator focused on cutting-edge AI systems such as chatbots and image generators. Labour has pledged to introduce binding regulation for the most advanced AI companies.

Such a setup would provide quick insights into how AI was going wrong, said the thinktank, and help the government anticipate similar incidents. It added that incident reporting would help coordinate responses to serious incidents where speed was crucial, and identify early signs of large-scale harms that could emerge in the future.

Some models may only show harms once they are fully released, despite being tested by the UK’s AI Safety Institute; incident reporting would at least allow the government to see how well the country’s regulatory setup is addressing those risks.

CLTR said the Department for Science, Innovation and Technology (DSIT) risked lacking an up-to-date picture of misuse of AI, such as disinformation campaigns, attempted development of bioweapons, bias in AI systems, or misuse of AI in public services, as happened in the Netherlands, where tax authorities plunged thousands of families into financial distress after deploying an AI program in a misguided attempt to tackle benefits fraud.

“DSIT should prioritise ensuring that the UK government finds out about such novel harm not through the news, but through proven processes of incident reporting,” said the report.

CLTR, which is largely funded by the wealthy Estonian computer programmer Jaan Tallinn, recommended three immediate steps: creating a government system to report AI incidents in public services; asking UK regulators to find gaps in AI incident reporting; and considering a pilot AI incident database, which could collect AI-related episodes from existing bodies such as the AAIB, the Information Commissioner’s Office and the medicines regulator, the MHRA.

CLTR said the reporting system for AI use in public services could build on the existing algorithmic transparency reporting standard, which encourages departments and police authorities to reveal AI use.

In May, 10 countries including the UK, plus the EU, signed a statement on AI safety cooperation that included monitoring “AI harms and safety incidents”.

The report added that an incident reporting system would also help DSIT’s Central AI Risk Function (CAIRF), which assesses and reports on AI-associated risks.
