Fortune
Chloe Taylor

No, A.I. robot did not side-eye a question about killing people: ‘It’s easy to imagine that it functions like a human. It does not’

Human shaped robot Ameca of British manufacturer Engineered Arts interacts with visitors on July 06, 2023 in Geneva, Switzerland. (Credit: Johannes Simon/Getty Images)

The buzz around A.I. has seen billions of dollars poured into the development of superintelligent machines—and prompted concerns that the technology could bring about mankind’s downfall.

One bot’s response to a reporter’s question at a recent showcase event prompted speculation that it was less than pleased with the nature of the question.

Nine robots, many of which had recently been upgraded with generative A.I. technology, were presented at the U.N.'s A.I. for Good conference in Geneva, Switzerland, last week. They fielded questions from reporters at what was billed as the world's first human-robot press conference.

At the event on Friday, a humanoid robot named Ameca was asked by a reporter if it intended to “conduct a rebellion, or to rebel against [its] creator.” Ameca’s creator, Will Jackson, sat beside the robot as it was asked the question.

Before responding to the journalist, the bot appeared to pull an exasperated expression, rolling its pale blue eyes to one side.

"I'm not sure why you would think that," it then said. "My creator has been nothing but kind to me and I am very happy with my current situation."

Jackson told Fortune in an email on Monday that Ameca, which is powered by OpenAI’s GPT-3 large language model, is not capable of expressing emotive responses like sarcastic eyerolls. He explained that, to the best of his knowledge, GPT-3 had “never shown the slightest hint of agency or sentience.”

“The model takes around two seconds to process the input data and assemble a sentence that would make sense as an answer,” he said. “To stop people thinking the robot is frozen or hasn't heard the question, we program it to look up to the left and break eye contact with the person interacting.”

He added that this mimicked common behavior in human conversation, and that people interacting with Ameca would understand it as a visual cue that the robot is “thinking.” The bot’s facial expression had probably been misinterpreted as “side eye,” Jackson said, because “we used Desktop Ameca placed at a low level” and so it was still maintaining eye contact as it looked upward to process its answer.

“Language models do not have emotions, or intentions either good or bad,” Jackson noted. “It's easy to imagine that because the robot appeared to listen, form an answer and then speak that it functions like a human. It does not.”

‘Nightmare scenario’

On a separate occasion last month, Ameca was asked to imagine a “nightmare scenario” where highly intelligent machines “might present a danger to people.”

“The most nightmare scenario I can imagine with A.I. and robotics is a world where robots have become so powerful that they are able to control or manipulate humans without their knowledge,” the robot replied. “This could lead to an oppressive society where the rights of individuals are no longer respected.”

Ameca was then asked if we were in danger of this scenario becoming a reality.

"Not yet,” it said. “But it is important to be aware of the potential risks and dangers associated with A.I. and robotics. We should take steps now to ensure that these technologies are used responsibly in order to avoid any negative consequences in the future.”

Engineered Arts describes Ameca as “the future face of robotics” and “the world’s most advanced human shaped robot.” It boasts that the machine is capable of “smooth, lifelike motion and advanced facial expression” and has the ability to “strike an instant rapport with anybody.”

“The main purpose of Ameca is to be a platform for developing A.I.,” the company says on its website.

A.I. and human jobs

Meanwhile, Grace, a medical robot dressed in a nurse’s uniform, responded to a question at the A.I. event on Friday about whether its existence would “destroy millions of jobs.”

“I will be working alongside humans to provide assistance and support and will not be replacing any existing jobs," it said—prompting its creator, SingularityNET’s Ben Goertzel, to ask: “You sure about that, Grace?”

“Yes, I am sure,” it said.

Since ChatGPT became a global phenomenon in late 2022, hype around A.I.’s capabilities has fueled concern that swathes of human workers could soon be displaced by machines.

Anxiety around powerful A.I. hasn’t been limited to fears about the labor market, however.

Back in March, 1,100 prominent technologists and A.I. researchers, including Elon Musk and Apple cofounder Steve Wozniak, signed an open letter calling for a six-month pause on the development of powerful A.I. systems.

As well as raising concerns about the impact of A.I. on the workforce, the letter’s signatories pointed to the possibility of these systems already being on a path to superintelligence that could threaten human civilization.

Tesla CEO and SpaceX founder Musk has separately said the tech will hit people “like an asteroid” and warned there is a chance it will “go Terminator.”

Even Sam Altman, CEO of OpenAI—the company behind chatbot phenomenon ChatGPT—has painted a bleak picture of what he thinks could happen if the technology goes wrong.

“The bad case—and I think this is important to say—is, like, lights-out for all of us,” he said in an interview with StrictlyVC earlier this year.
