The Conversation
Mark Tsagas, Senior Lecturer in Law, Cybercrime & AI Ethics, University of East London

Human oversight of AI systems may not be as effective as we think — especially when it comes to warfare

U.S. Air Force Photo / Lt. Col. Leslie Pratt

As artificial intelligence (AI) becomes more powerful – even being used in warfare – there’s an urgent need for governments, tech companies and international bodies to ensure it’s safe. And a common thread in most agreements on AI safety is a need for human oversight of the technology.

In theory, humans can operate as safeguards against misuse and potential hallucinations (where AI generates incorrect information). This could involve, for example, a human reviewing content that the technology generates (its outputs). However, there are inherent challenges to the idea of humans acting as an effective check on computer systems, as a growing body of research and several real-life examples of military use of AI demonstrate.

Many of the efforts thus far to create regulations for AI already contain language promoting human oversight and involvement. For instance, the EU's AI Act stipulates that high-risk AI systems – for example, those already in use that automatically identify people using biometric technology such as a retina scanner – need to be separately verified and confirmed by at least two humans who possess the necessary competence, training and authority.

In the military arena, the importance of human oversight was recognised by the UK government in its February 2024 response to a parliamentary report on AI in weapon systems. The report emphasises “meaningful human control” through the provision of appropriate training for the humans involved. It also stresses the notion of human accountability and says that decision-making in actions by, for instance, armed aerial drones cannot be shifted to machines.

This principle has largely been kept in place so far. Military drones are currently controlled by human operators and their chain of command, who are responsible for actions taken by an armed aircraft. However, AI has the potential to make drones and the computer systems they use more capable and autonomous.

This includes their target acquisition systems. In these systems, software guided by AI would select and lock onto enemy combatants, allowing the human to approve a strike on them with weapons.

While not thought to be widespread just yet, the war in Gaza appears to have demonstrated how such technology is already being used. The Israeli-Palestinian publication +972 Magazine described a system called Lavender being used by Israel. This is reportedly an AI-based target recommendation system, coupled with other automated systems that track the geographical location of identified targets.

Target acquisition

In 2017, the US military conceived a project named Maven with the goal of integrating AI into weapons systems. Over the years, it has evolved into a target acquisition system. Reportedly, it has greatly increased the efficiency of the target recommendation process for weapons platforms.

In line with recommendations from academic work on AI ethics, there is a human in place to oversee the outputs of the target acquisition mechanisms as a critical part of the decision-making loop.

Nonetheless, work on the psychology of how humans work with computers raises important issues to consider. In a 2006 peer-reviewed paper, the US academic Mary Cummings summarised how humans can end up putting excessive trust in machine systems and their conclusions – a phenomenon known as automation bias.

This has the potential to interfere with the human role as a check on automated decision-making if operators are less likely to question a machine’s conclusions.

Drone operators should act as checks on decisions taken by AI. U.S. Air Force/Master Sgt. Steve Horton

In another study published in 1992, the researchers Batya Friedman and Peter Kahn argued that humans’ sense of moral agency can be diminished when working with computer systems, to the extent that they consider themselves unaccountable for consequences that arise. Indeed, the paper explains that people can even start to attribute a sense of agency to the computer systems themselves.

Given these tendencies, it would be prudent to consider whether placing excessive trust in computer systems, along with the potential for the erosion of humans’ sense of moral agency, could also affect target acquisition systems. After all, margins of error, while statistically small on paper, take on a horrifying dimension when we consider the potential impact on human lives.

The various resolutions, agreements and legislation on AI help provide assurances that humans will act as an important check on AI. However, it’s important to ask whether, after long periods in the role, a disconnect could occur whereby human operators start to see real people as items on a screen.


Mark Tsagas does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article was originally published on The Conversation. Read the original article.
