Evening Standard
Technology
Storm Newton

AI could lead to patient harm, researchers suggest

The Royal College of Radiologists said the NHS must make greater use of artificial intelligence (Alamy/PA)

Artificial intelligence (AI) could lead to patient harm if the development of models focuses more on accurately predicting outcomes than on improving treatment decisions, researchers have suggested.

Experts warned the technology could create “self-fulfilling prophecies” when trained on historic data that does not account for demographics or the under-treatment of certain medical conditions.

They added that the findings highlight the “inherent importance” of applying “human reasoning” to AI decisions.

Academics in the Netherlands looked at outcome prediction models (OPMs), which use a patient’s individual features, such as health history and lifestyle information, to help medics weigh up the benefits and risks of treatment.

AI can perform these tasks in real-time to further support clinical decision-making.
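To make the idea concrete, here is a minimal sketch of what an OPM can look like in practice: a logistic regression fitted to synthetic patient data. The features, coefficients and data are invented for illustration and are not taken from the study.

```python
# A minimal, illustrative outcome prediction model (OPM).
# All features, effect sizes and data below are synthetic assumptions,
# not details from the Patterns study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical patient features: age (years), smoker flag, comorbidity count.
age = rng.normal(60, 10, n)
smoker = rng.integers(0, 2, n)
comorbidities = rng.poisson(1.5, n)
X = np.column_stack([age, smoker, comorbidities])

# Synthetic "true" risk of a poor outcome, driven by the same features.
logit = -7 + 0.08 * age + 0.9 * smoker + 0.5 * comorbidities
poor_outcome = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit the OPM and produce patient-specific risk predictions.
opm = LogisticRegression(max_iter=1_000).fit(X, poor_outcome)
risk = opm.predict_proba(X)[:, 1]  # predicted probability of a poor outcome
print(f"Predicted risk of a poor outcome, first patient: {risk[0]:.2f}")
```

A real OPM would be trained on historical records rather than simulated ones, which is where the problem the researchers describe begins: if the historical data reflects under-treatment of some groups, the model learns that pattern.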

The team created mathematical scenarios to test how AI may harm patient health and concluded that these models “can lead to harm”.

“Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions and they are frequently lauded as instruments for personalised, data-driven healthcare,” researchers said.

“We show, however, that using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment.

“These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model.”

The article, published in the data-science journal Patterns, also suggests AI model development “needs to shift its primary focus away from predictive performance and instead toward changes in treatment policy and patient outcome”.

Reacting to the risks outlined in the study, Dr Catherine Menon, a principal lecturer at the University of Hertfordshire’s department of computer science, said: “This happens when AI models have been trained on historical data, where the data does not necessarily account for such factors as historical under-treatment of some medical conditions or demographics.

“These models will accurately predict poor outcomes for patients in these demographics.

“This creates a ‘self-fulfilling prophecy’ if doctors decide not to treat these patients due to the associated treatment risks and the fact that the AI predicts a poor outcome for them.

“Even worse, this perpetuates the same historic error: under-treating these patients means that they will continue to have poorer outcomes.

“Use of these AI models therefore risks worsening outcomes for patients who have typically been historically discriminated against in medical settings due to factors such as race, gender or educational background.

“This demonstrates the inherent importance of evaluating AI decisions in context and applying human reasoning and assessment to AI judgments.”

AI is currently used across the NHS in England to help clinicians read X-rays and CT scans, freeing up staff time, as well as to speed up the diagnosis of strokes.

In January, Prime Minister Sir Keir Starmer pledged that the UK will be an “AI superpower” and said the technology could be used to tackle NHS waiting lists.

Ian Simpson, a professor of biomedical informatics at the University of Edinburgh, highlighted that AI OPMs “are not that widely used at the moment in the NHS”.

“Here they tend to be used in parallel with existing clinical management policies and often either for assisting diagnostics and/or speeding up processes like image segmentation,” he said.

Ewen Harrison, a professor of surgery and data science and co-director of the centre for medical informatics at the University of Edinburgh, said: “While these tools promise more accurate and personalised care, this study highlights one of a number of concerning downsides: predictions themselves can unintentionally harm patients by influencing treatment decisions.

“Say a hospital introduces a new AI tool to estimate who is likely to have a poor recovery after knee replacement surgery. The tool uses characteristics such as age, body weight, existing health problems and physical fitness.

“Initially, doctors intend to use this tool to decide which patients would benefit from intensive rehabilitation therapy.

“However, due to limited availability and cost, it is decided instead to reserve intensive rehab primarily for patients predicted to have the best outcomes.

“Patients labelled by the algorithm as having a ‘poor predicted recovery’ receive less attention, fewer physiotherapy sessions and less encouragement overall.”

He added that this leads to a slower recovery, more pain and reduced mobility in some patients.
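A toy simulation makes this feedback loop visible in the numbers. Everything below, from the latent “frailty” variable to the effect sizes, is an invented assumption used to illustrate the mechanism, not an analysis from the study; the point is that the model’s discrimination (AUC) can still look strong after deployment, exactly as the researchers warn.

```python
# Toy simulation of a "harmful self-fulfilling prophecy" in the
# knee-rehab scenario. All variables and effect sizes are illustrative
# assumptions, not figures from the Patterns study.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 10_000

frailty = rng.normal(0, 1, n)                      # latent patient frailty
predicted_risk = frailty + rng.normal(0, 0.5, n)   # imperfect OPM score

# Policy: reserve intensive rehab for the half predicted to do best.
intensive_rehab = predicted_risk < np.median(predicted_risk)

# Outcomes: frailty raises the chance of poor recovery; intensive rehab
# lowers it. Withholding rehab from "high-risk" patients worsens their
# outcomes, which reinforces the model's apparent accuracy.
p_poor = 1 / (1 + np.exp(-(frailty - 1.0 * intensive_rehab)))
poor_recovery = rng.random(n) < p_poor

print(f"AUC after deployment: {roc_auc_score(poor_recovery, predicted_risk):.2f}")
print(f"Poor recovery without intensive rehab: {poor_recovery[~intensive_rehab].mean():.1%}")
print(f"Poor recovery with intensive rehab:    {poor_recovery[intensive_rehab].mean():.1%}")
```

In this sketch the under-treated group really does do worse, so the model appears vindicated by its own deployment: good discrimination coexists with patient harm, which is the pattern the researchers describe.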

“These are real issues affecting AI development in the UK,” Prof Harrison said.
