Financial Times
Business
John Thornhill

AI's rapid advance sparks call for a code for robots

Maximum Overdrive has entered movie legend as one of the worst films ever made. The 1986 science-fiction horror comedy imagined a world in which inanimate objects, including bulldozers, chainsaws and electric hairdryers, came to life and started massacring people. Even Stephen King, the bestselling author who wrote and directed the film, described it as a "moron movie".

But real life came tragically close to imitating fiction during the filming of Maximum Overdrive when a radio-controlled lawnmower ran into the set and badly wounded the director of photography, who lost an eye. He sued Mr King and 17 others for $18m for unsafe working practices before eventually settling out of court.

In some respects, the history of this film exemplifies much of the popular debate about automation, robots and artificial intelligence. While we seem to panic about the existential threat such technologies may pose to mankind in the distant future, we are in danger of overlooking some of the more immediate concerns about how to manage our mechanical creations.

Who should take moral, ethical and legal responsibility for the actions of increasingly ubiquitous robots? Should it be the manufacturers, programmers or users? In the longer run, when they acquire higher powers of cognition and perhaps consciousness, should it even be the robots themselves?

...

In his forthcoming book Android Dreams, Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Australia, argues that the development of thinking machines is as bold and ambitious an adventure as mankind has ever attempted. "Like the Copernican revolution, it will fundamentally change how we see ourselves in the universe," he writes.

Such issues are becoming all the more urgent given the explosive growth in the number of drones, driverless cars and medical, educational and domestic robots whizzing around our skies, streets and homes. While this robot revolution promises to improve the human condition, it also threatens to unleash a disruptive economic force.

Ryan Calo, a law professor at the University of Washington, says that we tend to talk about robots as if they are a future technology, ignoring the fact that we have already been living with them for several decades.

"If you want to envisage the future in the 1920s, 1940s, 1980s, or in 2017, then you think of robots. But the reality is that robots have been in our societies since the 1950s," he says.

In a paper called Robots in American Law, Mr Calo studied nine legal cases over the past six decades involving robots and found that much of the judicial reasoning was based on poor, often outdated, views of technology. "Robots confront courts with unique legal challenges that judges are not well positioned to address," he concluded.

The cases mostly revolved around whether robots could be considered surrogates for people: if they should be deemed "animate" for the purposes of import tariffs; whether they could "perform" as entertainers in a concert hall; and whether an unmanned robot submarine could "possess" a wreck for the purposes of salvage claims.

Mr Calo found that judges had a very strong mental model of robots as programmable tools or discretion-less machines. But that view is looking increasingly anachronistic as machines assume embodied, sometimes humanoid, form and demonstrate what roboticists call "emergent behaviour".

"Emergence is a property that robots will behave in ways that the system cannot anticipate," says Mr Calo. "It is not autonomy in a philosophical sense. But it raises the prospect of having victims without perpetrators."

For example, some high-speed trading algorithms are "learning" from patterns in financial markets and responding in ways that their creators cannot predict, and perhaps cannot even understand. Driverless cars are being developed to respond to events in real time (one hopes) rather than being preprogrammed to anticipate every situation on the road.

...

This month, 116 founders of robotics and AI companies signed a petition calling for an outright ban on killer robots, known as lethal autonomous weapons systems, or Laws. The use of such weapons systems crosses a moral red line, they claim. Only humans should be permitted to kill humans.

"We should not lose sight of the fact that, unlike other potential manifestations of AI that still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now," says Ryan Gariepy, founder of Ontario-based Clearpath Robotics. "The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale."

However, drawing neat lines between humans and robots in this fast-evolving world is tricky. The latest technologies are blurring the line between people and instruments, making robots agentic, if not necessarily agents. Although today's robots would fail the legal test of mens rea (having intent to commit an offence), they still appear "responsible" for their actions in a layman's sense of the term.

A second big development in robotics, which muddies the picture still further, is the embodiment of AI in physical, sometimes humanoid, form in machines designed to engage directly with people.

Henny Admoni, an assistant professor at the Robotics Institute at Carnegie Mellon University, says that historically most robots have operated separately from humans, doing dull, dirty and dangerous work mostly in industrial settings. But that is now changing fast with the arrival of chatbots, drones and domestic robots.

"Over the past 10 years we have seen a rise of robots intended to engage directly with people," she says.

...

That has spurred a fast-developing academic field known as human-robot interaction, or HRI. Robotics departments at both universities and companies have been hiring sociologists, anthropologists, lawyers, philosophers and ethicists to inform how these interactions should evolve.

"In a legal and moral sense robots are machines that are programmed by people and designed by people," says Ms Admoni. "But we do want robots to act autonomously. We do want robots that can handle new situations. Ethics is a very recent addition to the conversation because robots can do things independently now."

Some of the most striking humanoid robots have been built by David Hanson, founder of Hong Kong-based Hanson Robotics. His best-known creation is Sophia, a spookily lifelike robot that appeared on The Tonight Show with Jimmy Fallon in April.

Mr Hanson says AI systems are becoming good at understanding verbal communication as a result of natural language processing technologies. But he argues that robots should also learn non-verbal means of communication, such as facial expressions and hand gestures, and that we need them to understand human behaviours, cultures and values, too. The best way to do that, he says, is to enable robots to learn, as babies do, by living with and interacting with humans.

By developing "bio-inspired intelligent algorithms" and allowing them to absorb rich social data, via sophisticated sensors, we can create smarter and faster robots, Mr Hanson says. That will inexorably lead to the point where the technology will be "literally alive, self-sufficient, emergent, feeling, aware".

He adds: "I want robots to learn to love and what it means to be loved and not just love in the small sense. Yes, we want robots capable of friendship and familial love, of this kind of bonding.

"However, we also want robots to love in a bigger sense, in the sense of the Greek word agape, which means higher love, to learn to value information, social relationships, humanity."

Mr Hanson argues a profound shift will happen when machines begin to understand the consequences of their actions and invent solutions to their everyday challenges. "When machines can reason this way then they can begin to perform acts of moral imagination. And this is somewhat speculative but I believe that it's coming within our lifetimes," he says.

If such "moral machines" can truly be created then that raises a whole host of new questions and challenges. Would the robot or its owner possess the rights to its data? Could robots be said to have their own legal identity? Should they, as Mr Hanson argues, even be able to earn rights?

Mr Hanson is on the outer edges of the robot debate and his notions seem fantastical today, but there are good reasons for beginning to focus on such issues. For different legal reasons, all US corporations and some sacred Indian rivers have already been granted the status of personhood. The UK has also given additional legal protection to one invertebrate, the octopus, because it has a higher form of sentience. Will future robots be so different?

Murray Shanahan, professor of cognitive robotics at Imperial College London and senior research scientist at Google DeepMind, says that we have already reached the point where we should take responsibility for some of our mechanical creations, just as we do for great works of art.

"We have a moral responsibility not to destroy the 'Mona Lisa' because it is a remarkable artefact, or an archive or any object that has immense emotional attachment," he says.

But he argues there are great dangers in anthropomorphising systems of intelligence if that leads to misinterpretations and misunderstandings of the underlying technology. Manufacturers should not try to trick users into believing that robots have more capabilities than they possess. "People should not be fooled into thinking that robots are smarter than they actually are," he says.

Mr Shanahan argues that it is important to distinguish between cognition and consciousness in determining our responsibilities towards machines. "At the moment I think it is completely inappropriate to talk about robot rights. We do not have any moral responsibilities in that respect. But I am not saying that it will never be appropriate.

"I agree that robots could one day have a consciousness. But they would first have to have the ability to play, to build things and take a chocolate biscuit out of a jar on a shelf," he says.

For the moment, few politicians appear interested in such debates. But a grassroots movement of academics and entrepreneurs is pushing these issues higher up the agenda.

In the US, some academics, such as Mr Calo, have been arguing for the creation of a Federal Robotics Commission to examine the moral and legal issues surrounding the use of smart machines. Mr Calo says this idea is beginning to gain some limited traction in Congress, if not in the Trump administration.

This year, members of the European Parliament passed a resolution calling on the European Commission to establish a similar expert agency for robotics and AI and draw up EU-wide rules. In particular, MEPs urged the commission to focus on safety and privacy issues and consider giving robots a form of "electronic personhood".

Some powerful West Coast entrepreneurs also seem intent on generating a debate. This month Elon Musk, the tech entrepreneur behind Tesla Motors and SpaceX, who backed the ban on lethal autonomous weapons systems, called for broader regulation.

"Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be, too," he tweeted.

Copyright The Financial Times Limited 2017
