Monitoring UK bank accounts for benefits fraud would be ‘huge blow to privacy’

Robert Booth, social affairs correspondent

[Image: a person using a laptop while holding a bank card.] Keir Starmer last week announced a fraud, error and debt bill to make banks share data on account holders that ‘may show indications of potential benefit overpayments’. Photograph: Tim Goode/PA

Ministers have been urged not to resurrect Conservative plans to tackle welfare fraud by launching mass algorithmic surveillance of bank accounts.

Disability rights, poverty, pensioner and privacy groups fear the government is poised to deliver a “snooper’s charter” by using automation and possibly artificial intelligence to crack down on benefit cheats and mistakes, which cost £10bn a year. They warn it would mean a “huge blow for privacy in the UK”.

In a letter this week to Liz Kendall, the secretary of state for work and pensions, they said requiring banks to scan accounts for suspicious behaviour would be a severe “intrusion into the nation’s privacy, with potentially punitive consequences for vulnerable individuals”.

Keir Starmer announced a fraud, error and debt bill last week to make banks share data on account holders that “may show indications of potential benefit overpayments”.

Details are yet to be published, but the Department for Work and Pensions (DWP) stressed that the government would not have access to people’s bank accounts and would not use artificial intelligence to examine the data. It said it would be able to request data from banks indicating where a customer may not meet the eligibility rules for benefits, and that any signal of fraud or error would always be investigated by a member of staff.

The government is concerned that welfare fraud is becoming more sophisticated and that, without new legal powers, it cannot keep pace with its changing nature or tackle it robustly enough. It believes asking banks to share claimants’ data with the DWP could help it tackle benefit fraud and save £1.6bn over five years.

The previous Conservative bill, which aimed to increase public and business confidence in AI tools, did not make it through parliament before the July general election. It was welcomed by some, including the technology industry and the information commissioner.

It also aimed to facilitate the flow and use of personal data for law enforcement and national security purposes. Aspects of the bill which focused on privacy rights and automated decision-making were strongly contested.

Labour’s new bill could compel banks and other third parties to trawl the accounts of the entire population to target welfare recipients for monitoring. By the government’s own estimate, it would prevent only about 3% of the total lost to fraud and error.

Such mass financial surveillance powers would be “disproportionate”, according to the signatories of the letter to Kendall, which included leaders of Disability Rights UK, Age UK, Privacy International, Child Poverty Action Group and Big Brother Watch.

“Imposing suspicionless algorithmic surveillance on the entire public has the makings of a Horizon-style scandal – with vulnerable people most likely to bear the brunt when these systems go wrong,” they wrote to Kendall, referring to the Post Office software that resulted in the wrongful imprisonment of post office operators. “Pensioners, disabled people, and carers shouldn’t have to live in fear of the government prying into their finances.”

A DWP spokesperson said: “These claims are false. These powers will be used appropriately and proportionately through robust, new oversight and reporting rules, and our staff will be trained to the highest possible standards. The information provided by banks is unrelated to DWP algorithms and any signals of potential fraud will always be looked at comprehensively by a member of staff.”

The warning comes amid widening use of artificial intelligence in government departments, with about 70% of them estimated to be piloting or planning to use AI, according to the National Audit Office spending watchdog.

Welfare algorithms are far from faultless. It emerged in the summer that DWP software had wrongly flagged more than 200,000 people for investigation for suspected fraud and error.
