The government’s use of artificial intelligence (AI) risks producing discriminatory results against benefit claimants and ethnic minorities, an investigation has found.
A total of eight Whitehall departments and some police forces are using the burgeoning technology to make life-altering decisions for members of the public, the Guardian reported.
In one case, Labour MP Kate Osamor claimed that an algorithm used by the Department for Work and Pensions (DWP) to detect fraud may have led to dozens of Bulgarians having their benefits suspended.
Meanwhile, an internal Home Office evaluation seen by the Guardian showed that an algorithm used to indicate sham marriages disproportionately singled out people from Albania, Greece, Romania and Bulgaria.
Several police forces are using AI tools and facial recognition cameras for surveillance and to predict and prevent future crimes. The investigation claims that when the cameras' sensitivity settings are dialled down – as they may be in an effort to catch more criminals – they incorrectly identify at least five times more black people than white people.
The findings come as the UK prepares to host an international summit on AI at Bletchley Park. The event is viewed as a means for the UK to stamp its authority on AI regulation and to grapple with the existential threat some luminaries, including Elon Musk, believe it poses.
But while the summit focuses on the headline-grabbing future of the technology, Britain is already harnessing AI in many areas that affect the lives of everyday people.
The wide-ranging use of AI in the public sector was uncovered after the Cabinet Office began encouraging departments and law enforcement to voluntarily disclose their use of the technology, specifically when it could have a material impact on the general public.
A separate database compiled by the Public Law Project also tracks the automated tools used by the government and ranks them based on transparency.
Experts and tech insiders have repeatedly warned that AI can reinforce biases that are ingrained in the datasets used to train the systems. After pressure from rights groups over the dangers of predictive policing and facial recognition surveillance, the EU passed a landmark AI law earlier this year banning such systems.
The DWP told the Guardian that its algorithm does not take nationality into account. And both the DWP and the Home Office insisted that the processes they use are fair because the final decisions are made by people. The Metropolitan police did not respond to the findings.
John Edwards, the UK’s information commissioner, said he had examined many of the AI tools being used in the public sector, including the DWP’s fraud detection systems, and had found none to be in breach of data protection rules.