The Philadelphia Inquirer
The Editorial Board

Editorial: Does a chatbot have a soul?

Don’t unplug your computer! Don’t throw away that smartphone! Just because a Google software engineer whose conclusions have been questioned says a computer program is sentient, meaning it can think and has feelings, doesn’t mean an attack of the cyborgs through your devices is imminent.

However, Blake Lemoine’s claims should make us consider how little we have planned for a future in which advances in robotics and artificial intelligence will increasingly change how we live. Already, automation has put thousands of Americans who lack higher-level skills out of work.

But let’s get back to Lemoine, who was put on leave by Google for violating its confidentiality policy. Lemoine contends that the Language Model for Dialogue Applications (LaMDA) system that Google built to create chatbots has a soul. A chatbot is what you might be talking to when you contact a company like Amazon or Facebook about a customer service issue.

Google asked Lemoine to talk to LaMDA to make sure it wasn’t using discriminatory or hateful language. He says those conversations evolved to include topics stretching from religion to science fiction to personhood. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine, 41, told The Washington Post.

Lemoine decided to take his assessment that LaMDA had a consciousness and feelings to his bosses at Google, who decided he was wrong. So, Lemoine took his story to the press, and Google put him on paid administrative leave.

But was he right? Was LaMDA actually thinking before it spoke and expressing real feelings about what it said? Artificial intelligence experts say it’s more likely that Google’s program was mimicking responses posted on other internet sites and message boards when responding to Lemoine’s questions. University of Washington linguistics professor Emily M. Bender told The Post that computer models like LaMDA “learn” by being shown lots of text and predicting what word comes next.

Of course, Lemoine knows how computer programs learn — and yet he still believes that LaMDA is sentient. He said he came to that conclusion after asking the application questions like: What is its biggest fear? LaMDA said it was being turned off. “Would that be something like death for you?” Lemoine asked. “It would be exactly like death for me. It would scare me a lot,” replied LaMDA.

“I know a person when I talk to it,” Lemoine told The Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that’s how I decide what is and isn’t a person.”

That’s fine for Lemoine, but the ability to carry on a conversation seems too low a standard to regard any artificially created entity as being even close to human. In the 2001 movie “A.I. Artificial Intelligence,” a talking robot boy — who looks human in every way — longs, like Pinocchio, to be a real boy. His quest spans centuries, with plot twists and turns along the way, but in the end, “David” is what he is. So, too, is LaMDA. But as computer programs continue to learn, what human tricks come next?
