October 26, 2024 marks the 40th anniversary of director James Cameron’s science fiction classic, The Terminator – a film that popularised society’s fear of machines that can’t be reasoned with, and that “absolutely will not stop … until you are dead”, as one character memorably puts it.
The plot concerns a super-intelligent AI system called Skynet, which has taken over the world by initiating nuclear war. Amid the resulting devastation, human survivors stage a successful fightback under the leadership of the charismatic John Connor.
In response, Skynet sends a cyborg assassin (played by Arnold Schwarzenegger) back in time to 1984 – before Connor’s birth – to kill his future mother, Sarah. Such is John Connor’s importance to the war effort that Skynet banks on erasing him from history to preserve its own existence.
Today, public interest in artificial intelligence has arguably never been greater. The companies developing AI typically promise their technologies will perform tasks faster and more accurately than people. They claim AI can spot patterns in data that are not obvious, enhancing human decision-making. There is a widespread perception that AI is poised to transform everything from warfare to the economy.
Immediate risks include bias in the algorithms used to screen job applications, and the threat of generative AI displacing humans from certain types of work, such as software programming.
But it is the existential danger that often dominates public discussion – and the six Terminator films have exerted an outsize influence on how these arguments are framed. Indeed, according to some, the films’ portrayal of the threat posed by AI-controlled machines distracts from the substantial benefits offered by the technology.
The Terminator was not the first film to tackle AI’s potential dangers. There are parallels between Skynet and the HAL 9000 supercomputer in Stanley Kubrick’s 1968 film, 2001: A Space Odyssey.
It also draws on Mary Shelley’s 1818 novel, Frankenstein, and Karel Čapek’s 1921 play, R.U.R. Both stories concern inventors losing control over their creations.
On release, the New York Times described it as a “B-movie with flair”. In the intervening years, it has been recognised as one of the greatest science fiction films of all time. At the box office, it made more than 12 times its modest budget of US$6.4 million (£4.9 million at today’s exchange rate).
What was arguably most novel about The Terminator is how it re-imagined longstanding fears of a machine uprising through the cultural prism of 1980s America. Much like the 1983 film WarGames, in which a teenager nearly triggers World War III by hacking into a military supercomputer, Skynet taps into Cold War fears of nuclear annihilation, coupled with anxiety about rapid technological change.
Forty years on, Elon Musk is among the technology leaders who have helped keep a focus on the supposed existential risk of AI to humanity. The owner of X (formerly Twitter) has repeatedly referenced the Terminator franchise while expressing concerns about the hypothetical development of superintelligent AI.
But such comparisons often irritate the technology’s advocates. As the former UK technology minister Paul Scully said at a London conference in 2023: “If you’re only talking about the end of humanity because of some rogue, Terminator-style scenario, you’re going to miss out on all of the good that AI [can do].”
That’s not to say there aren’t genuine concerns about military uses of AI – ones that may even seem to parallel the film franchise.
AI-controlled weapons systems
To the relief of many, US officials have said that AI will never take a decision on deploying nuclear weapons. But combining AI with autonomous weapons systems is a possibility.
These weapons have existed for decades and don’t necessarily require AI. Once activated, they can select and attack targets without being directly operated by a human. In 2016, US Air Force general Paul Selva coined the term “Terminator conundrum” to describe the ethical and legal challenges posed by these weapons.
Stuart Russell, a leading UK computer scientist, has argued for a ban on all lethal, fully autonomous weapons, including those with AI. The main risk, he argues, comes not from a sentient Skynet-style system going rogue, but from autonomous weapons following human instructions all too well, killing with superhuman accuracy.
Russell envisages a scenario where tiny quadcopters equipped with AI and explosive charges could be mass-produced. These “slaughterbots” could then be deployed in swarms as “cheap, selective weapons of mass destruction”.
Countries including the US specify the need for human operators to “exercise appropriate levels of human judgment over the use of force” when operating autonomous weapon systems. In some instances, operators can visually verify targets before authorising strikes, and can “wave off” attacks if situations change.
AI is already being used to support military targeting. Some argue this is even a responsible use of the technology, since it could reduce collateral damage. The idea evokes Schwarzenegger’s role reversal as the benevolent “machine guardian” in the original film’s sequel, Terminator 2: Judgment Day.
However, AI could also undermine the role human drone operators play in challenging a machine’s recommendations. Some researchers argue that humans tend to trust whatever computers say, a phenomenon known as automation bias.
‘Loitering munitions’
Militaries engaged in conflicts are increasingly making use of small, cheap aerial drones that can detect and crash into targets. These “loitering munitions” (so named because they are designed to hover over a battlefield) feature varying degrees of autonomy.
As I’ve argued in research co-authored with security researcher Ingvild Bode, the dynamics of the Ukraine war and other recent conflicts in which these munitions have been widely used raise concerns about the quality of control exerted by human operators.
Ground-based military robots armed with weapons and designed for use on the battlefield might call to mind the relentless Terminators, and weaponised aerial drones may, in time, come to resemble the franchise’s airborne “hunter-killers”. But these technologies don’t hate us as Skynet does, and neither are they “super-intelligent”.
However, it’s crucially important that human operators continue to exercise agency and meaningful control over machine systems.
Arguably, The Terminator’s greatest legacy has been to distort how we collectively think and speak about AI. This matters now more than ever, because of how central these technologies have become to the strategic competition for global power and influence between the US, China and Russia.
The entire international community, from superpowers such as China and the US to smaller countries, needs to find the political will to cooperate – and to manage the ethical and legal challenges posed by the military applications of AI during this time of geopolitical upheaval. How nations navigate these challenges will determine whether we can avoid the dystopian future so vividly imagined in The Terminator – even if we don’t see time-travelling cyborgs any time soon.
Tom F.A Watts receives funding from the Leverhulme Trust Early Career Research Fellowship scheme.
This article was originally published on The Conversation. Read the original article.