The Guardian - UK
Technology
Kiran Stacey

UK risks scandal over ‘bias’ in AI tools in use across public sector

The DWP said in response to a FoI request that it could not reveal details of how the algorithm works in case it helps people game the system. Composite: Guardian Design/EPA

Kate Osamor, the Labour MP for Edmonton, recently received an email from a charity about a constituent of hers who had had her benefits suspended apparently without reason.

“For well over a year now she has been trying to contact DWP [the Department for Work and Pensions] and find out more about the reason for the suspension of her UC [Universal Credit], but neither she nor our casework team have got anywhere,” the email said. “It remains unclear why DWP has suspended the claim, never mind whether this had any merit … she has been unable to pay rent for 18 months and is consequently facing eviction proceedings.”

Osamor has been dealing with dozens of such cases in recent years, often involving Bulgarian nationals. She believes they have been victims of a semi-automated system that uses an algorithm to flag up potential benefits fraud before referring those cases to humans to make a final decision on whether to suspend people’s claims.

“I was contacted by dozens of constituents around the beginning of 2022, all Bulgarian nationals, who had their benefits suspended,” Osamor said. “Their cases had been identified by the DWP’s Integrated Risk and Intelligence Service as being high risk after carrying out automated data analytics.

“They were left in destitution for months, with no means of appeal. Yet, in almost all cases, no evidence of fraud was found and their benefits were eventually restored. There was no accountability for this process.”

The DWP has been using AI to help detect benefits fraud since 2021. The algorithm flags cases it judges worthy of further investigation and passes them to a human for review.
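
The DWP has not disclosed how its model works, but the pattern described here, in which a statistical model scores claims and routes only the highest-scoring ones to human investigators, can be sketched in outline. In the sketch below the features, weights and threshold are all invented for illustration and bear no relation to the DWP's actual system.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    # Hypothetical features; the DWP has not disclosed what its model uses.
    weekly_amount: float
    months_active: int
    recent_address_changes: int

def risk_score(claim: Claim) -> float:
    """Toy scoring function standing in for the undisclosed model.

    A real system would use a trained statistical model; these
    weights are invented purely to illustrate the triage pattern.
    """
    score = 0.0
    score += 0.1 * claim.recent_address_changes
    score += 0.05 if claim.months_active < 3 else 0.0
    score += min(claim.weekly_amount / 1000.0, 0.5)
    return score

REVIEW_THRESHOLD = 0.4  # hypothetical cut-off

def triage(claims: list[Claim]) -> list[Claim]:
    """Route high-scoring claims to a human investigator.

    The model never decides the outcome itself: it only selects
    which cases a case worker looks at, as the article describes.
    """
    return [c for c in claims if risk_score(c) >= REVIEW_THRESHOLD]
```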

In response to a freedom of information request by the Guardian, the DWP said it could not reveal details of how the algorithm works in case it helps people game the system.

The department said the algorithm does not take nationality into account. But because these algorithms are self-learning, no one can know exactly how they weigh the data they receive.
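
One reason "nationality is not an input" is a weaker guarantee than it may sound is that a model trained on correlated features can partially reconstruct an excluded attribute. The sketch below uses synthetic data and a hypothetical proxy feature to show how easily a withheld group label can be recovered from the remaining inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: a protected attribute (never shown to the model)
# and a correlated proxy feature, e.g. a postcode-area code.
n = 5000
group = rng.integers(0, 2, size=n)             # withheld attribute
proxy = group + rng.normal(0, 0.3, size=n)     # correlated feature
other = rng.normal(0, 1, size=n)               # unrelated feature
X = np.column_stack([proxy, other])

# Train a model to predict the withheld attribute from the
# remaining features: if this works, any downstream model using
# those features can implicitly encode the attribute too.
clf = LogisticRegression().fit(X, group)
print(f"proxy recovery accuracy: {clf.score(X, group):.2f}")  # roughly 0.95
```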

The DWP said in its latest annual accounts that it monitored the system for signs of bias, but was limited in its capacity to do so where it had insufficient user data. The public spending watchdog has urged it to publish summaries of any internal equality assessments.

Shameem Ahmad, the chief executive of the Public Law Project, said: “In response to numerous Freedom of Information Act requests, and despite the evident risks, the DWP continues to refuse to provide even basic information on how these AI tools work, such as who they are being tested on, or whether the systems are working accurately.”

The DWP is not the only department using AI in a way that can have major impacts on people’s daily lives. A Guardian investigation has found such tools in use in at least eight Whitehall departments and a handful of police forces around the UK.

The Home Office has a similar tool to detect potential sham marriages. An algorithm flags marriage licence applications for review to a case worker who can then approve, delay or reject the application.

The tool has allowed the department to process applications much more quickly. But its own equality impact assessment found it was flagging a disproportionately high number of marriages from four countries: Greece, Albania, Bulgaria and Romania.

The assessment, which has been seen by the Guardian, found: “Where there may be indirect discrimination it is justified by the overall aims and outcomes of the process.”
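
The disproportionality the equality impact assessment describes is, in principle, simple to quantify once flag rates are broken down by nationality. A minimal sketch, using invented figures rather than Home Office data:

```python
# Hypothetical counts per applicant nationality; the Home Office
# has not published the real figures.
flagged = {"Greece": 120, "Albania": 340, "Bulgaria": 210,
           "Romania": 290, "all_others": 400}
applications = {"Greece": 800, "Albania": 1900, "Bulgaria": 1500,
                "Romania": 2100, "all_others": 40000}

baseline = flagged["all_others"] / applications["all_others"]

for country in ("Greece", "Albania", "Bulgaria", "Romania"):
    rate = flagged[country] / applications[country]
    # A ratio well above 1 indicates the group is flagged
    # disproportionately often relative to everyone else.
    print(f"{country}: flag rate {rate:.1%}, "
          f"{rate / baseline:.1f}x the baseline")
```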

Several police forces are also using AI tools, especially to analyse patterns of crime and for facial recognition. The Metropolitan police have introduced live facial recognition cameras across London to help officers detect people on their "watchlist".

But as with other AI tools, there is evidence the Met's facial recognition systems can produce biased results. A review carried out this year by the National Physical Laboratory found that under most conditions the cameras had very low error rates, and errors were spread evenly across different demographics.

When the sensitivity settings were dialled down, however, as they might be in an effort to catch more people, the cameras falsely detected at least five times more black people than white people.
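
This reflects a general property of biometric matching: lowering the match threshold multiplies false positives, and it does so unevenly when the underlying score distributions differ between groups. A schematic sketch with simulated scores (not the NPL's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated similarity scores for non-matches (people NOT on the
# watchlist) in two demographic groups. Group B's scores are drawn
# slightly higher, standing in for a model that generalises worse
# for that group. Purely illustrative.
scores_a = rng.normal(0.30, 0.10, 100_000)
scores_b = rng.normal(0.38, 0.10, 100_000)

for threshold in (0.60, 0.55, 0.50):
    fpr_a = float((scores_a >= threshold).mean())
    fpr_b = float((scores_b >= threshold).mean())
    ratio = fpr_b / fpr_a if fpr_a else float("inf")
    # As the threshold falls, both false positive rates climb,
    # but group B accumulates far more false matches in absolute terms.
    print(f"threshold {threshold:.2f}: "
          f"FPR A={fpr_a:.4%}, B={fpr_b:.4%}, ratio {ratio:.1f}x")
```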

The Met did not respond to a request for comment.

West Midlands police, meanwhile, are using AI to predict potential hotspots for knife crime and car theft, and are developing a separate tool to predict which criminals might become “high harm offenders”.

These are the examples about which the Guardian was able to find the most information.

In many cases, departments and police forces used an array of exemptions to freedom of information rules to avoid publishing details of their AI tools.

Some worry the UK could be heading for a scandal similar to that in the Netherlands, where tax authorities were found to have breached European data rules, or in Australia, where 400,000 people were wrongly accused of giving authorities incorrect details about their income.

John Edwards, the UK’s information commissioner, said he had examined many AI tools being used in the public sector, including the DWP’s fraud detection systems, and not found any to be in breach of data protection rules: “We have had a look at the DWP applications and have looked at AI being used by local authorities in relation to benefits. We have found they have been deployed responsibly and there has been sufficient human intervention to avoid the risk of harm.”

However, he added that facial recognition cameras were a source of concern. “We are watching with interest the developments of live facial recognition,” he said. “It is potentially intrusive and we are monitoring that.”

Some departments are trying to be more open about how AI is being used in the public sphere. The Cabinet Office is putting together a central database of such tools, but it is up to individual departments whether to include their systems or not.

In the meantime, campaigners worry that those on the receiving end of AI-informed decision-making are being harmed without even realising it.

Ahmad warned: “Examples from other countries illustrate the catastrophic consequences for affected individuals, governments, and society as a whole. Given the lack of transparency and regulation, the government is setting up the precise circumstances for it to happen here, too.”
