The Guardian - UK
Robert Booth, UK technology editor

‘AI’ tool could influence Home Office immigration decisions, critics say

Reflective signs at the Home Office headquarters. The government says the system delivers efficiencies by prioritising work and that a human remains responsible for each decision. Photograph: Peter Macdiarmid/Getty Images

A Home Office artificial intelligence tool which proposes enforcement action against adult and child migrants could make it too easy for officials to rubberstamp automated life-changing decisions, campaigners have said.

As new details of the AI-powered immigration enforcement system emerged, critics called it a “robo-caseworker” that could “encode injustices” because an algorithm is involved in shaping decisions, including returning people to their home countries.

The government describes it as a “rules-based” rather than AI system, as it does not involve machine-learning from data, and insists it delivers efficiencies by prioritising work and that a human remains responsible for each decision. The system is being used amid a rising caseload of asylum seekers who are subject to removal action, currently about 41,000 people.

Migrant rights campaigners called for the Home Office to withdraw the system, claiming it was “technology being used to make cruelty and harm more efficient”.

A glimpse into the workings of the largely opaque system has become possible after a year-long freedom of information battle, in which redacted manuals and impact assessments were released to the campaign group Privacy International. They also revealed that people whose cases are being processed by the algorithm are not specifically told that AI is involved.

The system is one of several AI programmes UK public authorities are deploying as officials seek greater speed and efficiency. There are calls for greater transparency about government AI use in fields ranging from health to welfare.

The secretary of state for science, Peter Kyle, said AI had “incredible potential to improve our public services … but, in order to take full advantage, we need to build trust in these systems”.

The Home Office disclosures show the Identify and Prioritise Immigration Cases (IPIC) system is fed an array of personal information about people who are the subject of potential enforcement action, including biometric data, ethnicity, health markers and data about criminal convictions.

The purpose is “to create an easier, faster and more effective way for immigration enforcement to identify, prioritise and coordinate the services/interventions needed to manage its caseload”, the documents state.

But Privacy International said it feared the system was set up in a way that would lead to human officials “rubberstamping” the algorithm’s recommendations for action on a case “because it’s so much easier … than to look critically at a recommendation and reject it”.

For officials to reject a proposed decision on “returns” – sending people back to their home country – they must give a written explanation and tick boxes relating to the reasons. But to accept the computer’s verdict, no explanation is required and the official clicks one button marked “accept” and confirms the case has been updated on other Home Office systems, the training manuals show.
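The released documents do not include the system’s code, but the asymmetry the manuals describe can be sketched in outline. In the hypothetical Python below, every name (Recommendation, review_recommendation, the reason codes) is invented for illustration and is not drawn from the Home Office system; the point is only that the rejection path demands extra input while acceptance does not.

```python
# Hypothetical sketch of the review step described in the training manuals:
# accepting a recommendation takes a single confirmation, while rejecting
# one requires a written explanation plus ticked reason boxes. All names
# here are illustrative, not taken from Home Office code.

from dataclasses import dataclass

# Invented reason codes standing in for the manuals' tick boxes.
REJECTION_REASONS = {"incorrect_data", "legal_barrier", "welfare_concern"}


@dataclass
class Recommendation:
    case_id: str
    proposed_action: str  # e.g. "returns"


def review_recommendation(rec, accept, explanation="", reasons=None):
    if accept:
        # Acceptance path: one click, no justification required.
        return f"{rec.case_id}: accepted '{rec.proposed_action}'"
    # Rejection path: extra friction before the override is recorded.
    if not explanation.strip():
        raise ValueError("rejection requires a written explanation")
    if not reasons or not set(reasons) <= REJECTION_REASONS:
        raise ValueError("rejection requires recognised reason boxes to be ticked")
    return f"{rec.case_id}: rejected '{rec.proposed_action}' ({', '.join(sorted(reasons))})"


rec = Recommendation("case-001", "returns")
print(review_recommendation(rec, accept=True))  # succeeds with no justification at all
```

In this toy version, as in the workflow the manuals describe, the path of least resistance is acceptance: it is the only branch that never raises an error or asks the reviewer for anything.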

Asked if this introduced a bias in favour of accepting the AI decision, the Home Office declined to comment.

Officials describe IPIC as a rules-based workflow tool that delivers efficiencies for immigration enforcement by recommending to caseworkers the next case or action they should consider. They stressed that every recommendation made in the IPIC system was reviewed by a caseworker who was required to weigh it on its individual merits. The system is also being deployed on cases of EU nationals seeking to remain in the UK under the EU settlement scheme.

Jonah Mendelsohn, a lawyer at Privacy International, said the Home Office tool could affect the lives of hundreds of thousands of people.

“Anyone going through the migration system currently has no way of knowing how the tool has been used in their case and if it is putting them at risk of wrongful enforcement action,” he said. “Without changes to ensure algorithmic transparency and accountability, the Home Office’s pledge to be ‘digital by default’ by 2025 will further encode injustices into the immigration system.”

Fizza Qureshi, the chief executive of the Migrants’ Rights Network, called for the tool to be withdrawn and raised concerns the AI could lead to racial bias.

“There is a huge amount of data that is input into IPIC that will mean increased data-sharing with other government departments to gather health information, and suggests this tool will also be surveilling and monitoring migrants, further invading their privacy,” she said.

IPIC has been in widespread operation since 2019-20. The Home Office refused previous freedom of information requests because greater openness “could be used to circumvent immigration controls by providing insight into how work in the Home Office and immigration enforcement is triaged”.

Madeleine Sumption, the director of the Migration Observatory at the University of Oxford, said the use of AI in the immigration system was not inherently wrong, because in theory AI could improve human decision-making rather than replace it.

She said: “The government might well be able to make the case AI is leading to better decision-making and reducing unnecessary detention but without greater transparency we can’t know.”

For example, if a country such as Iran is unlikely to accept deported nationals, pursuing such cases could be considered a waste of limited enforcement resources. Or if a person’s argument to remain is underpinned by human rights law, meaning they are unlikely to be deported quickly, it may be better to prioritise other removals and thereby avoid taking people into indefinite detention.

Home Office documents say the tool is used to “assess the removability and level of harm posed by immigration offenders, automate the identification and prioritisation of cases, and to provide information on the length of time a barrier to removal has been in place”.
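The Home Office has not published its rules, but a “rules-based” prioritisation of this kind, which involves no machine learning, can be illustrated with hand-written scoring rules. The Python sketch below is a toy under stated assumptions: the field names, weights and thresholds are invented for illustration and do not reflect IPIC’s actual logic.

```python
# Toy "rules-based" prioritisation: fixed, hand-written rules score each
# case on removability, assessed harm and how long a removal barrier has
# stood, then the caseload is sorted. No model is trained on any data.
# Field names and weights are assumptions, not IPIC's real rules.

from datetime import date


def priority_score(case, today):
    score = 0.0
    if case["destination_accepts_returns"]:  # removability rule
        score += 10.0
    score += {"low": 0.0, "medium": 5.0, "high": 15.0}[case["harm_level"]]
    barrier_since = case.get("barrier_since")
    if barrier_since is not None:  # long-standing barriers push a case down the queue
        score -= 2.0 * (today - barrier_since).days / 365.0
    return score


caseload = [
    {"id": "A1", "destination_accepts_returns": True, "harm_level": "high",
     "barrier_since": None},
    {"id": "B2", "destination_accepts_returns": False, "harm_level": "low",
     "barrier_since": date(2021, 3, 1)},
]
today = date(2024, 11, 12)
for case in sorted(caseload, key=lambda c: priority_score(c, today), reverse=True):
    print(case["id"], round(priority_score(case, today), 1))
```

Because the rules are fixed rather than learned, a system like this is only as fair as the hand-written weights and the data fed into it, which is why campaigners’ questions centre on what goes in and how recommendations are reviewed.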

A new draft data bill, introduced for debate in the UK parliament last month, “would effectively permit automated decision-making in most circumstances”, according to lawyers. That would be allowed as long as the individuals affected can make representations, obtain meaningful human intervention and challenge automated decisions.

• This article was amended on 12 November 2024 to reflect in the headline and through an addition to the third paragraph that the Home Office does not consider its IPIC system to be artificial intelligence because the tool does not learn from the data it receives; AI systems may, however, be described in terms of those that are rules-based and those that use machine-learning.
