The Atlantic
Technology
Alexis C. Madrigal

If Buddhist Monks Trained AI

The Harvard psychologist Joshua Greene is an expert in “trolleyology,” the self-effacing way he describes his research into the manifold variations on the “trolley problem.” The basic form of this problem is simple: There’s a trolley barreling towards five people, who will die if they’re hit. But you could switch the trolley onto another track on which only a single person stands. Should you do it?

From this simple test of moral intuition, researchers like Greene have created an endless set of variations. By varying the conditions ever so slightly, the trolley problem can serve as an empirical probe of human minds and communities (though not everyone agrees).

For example, consider the footbridge variation: You’re standing on a footbridge above the trolley tracks next to a very large person who, if pushed onto the tracks, would stop the trolley before it kills the five people. Though the number of lives saved is the same, it turns out that far more people would throw the switch than push the person.

But this is not quite a universal result. During a session Wednesday at the Aspen Ideas Festival, which is co-hosted by the Aspen Institute and The Atlantic, Greene joked that only two populations were likely to say that it was okay to push the person onto the tracks: psychopaths and economists.

Later in his talk, he returned to this, however, through the work of Xin Xiang, an undergraduate researcher who wrote a prize-winning thesis in his lab titled “Would the Buddha Push the Man off the Footbridge? Systematic Variations in the Moral Judgment and Punishment Tendencies of the Han Chinese, Tibetans, and Americans.”

Xiang administered the footbridge variation to practicing Buddhist monks near the city of Lhasa and compared their answers to those of Han Chinese and American populations. “The [monks] were overwhelmingly more likely to say it was okay to push the guy off the footbridge,” Greene said.

He noted that their responses were similar to those of clinically defined psychopaths and of people with damage to a specific part of the brain called the ventromedial prefrontal cortex.

“But I think the Buddhist monks were doing something very different,” Greene said. “When they gave that response, they said, ‘Of course, killing somebody is a terrible thing to do, but if your intention is pure and you are really doing it for the greater good, and you’re not doing it for yourself or your family, then that could be justified.’”

For Greene, the common intuition that it’s okay to use the switch but not to push the person is a kind of “bug” in our biologically evolved moral systems.

“So you might look at the footbridge trolley case and say, okay, pushing the guy off the bridge, that’s clearly wrong. That violates someone’s rights. You’re using them as a trolley stopper, et cetera. But the switch case, that’s fine,” he said. “And then I come along and tell you, look, a large part of what you’re responding to is pushing with your hands versus hitting a switch. Do you think that’s morally important?”

He waited a beat, then continued.

“If a friend was on a footbridge and called you and said, ‘Hey, there’s a trolley coming. I might be able to save five lives but I’m going to end up killing somebody! What should I do?’ Would you say, ‘Well, that depends. Will you be pushing with your hands or using a switch?’”

What people should strive for, in Greene’s estimation, is moral consistency that doesn’t flop around based on particulars that shouldn’t determine whether people live or die.

Greene tied his work on moral intuitions to the current crop of artificial-intelligence software. Even if AI systems never encounter problems as simplified as the trolley and footbridge scenarios, they must embed some kind of ethical framework. Even if their designers don’t lay out explicit rules for every behavior, the systems must be trained with some kind of ethical sense.

And, in fact, Greene said that he’s witnessed a surge in people talking about trolleyology because of the imminent appearance of self-driving cars on human-made roads. Autonomous vehicles do seem likely to face some variations on the trolley problem, though Greene said the most likely one is whether the cars should ever sacrifice their occupants to save more lives on the road.

Again, in that instance, people don’t hold consistent views. They say, in general, that cars should be utilitarian and save the most lives. But when it comes to their specific car, their feelings flip.

All these toy problems add up to a (still incomplete) portrait of human moral intuitions, which are being forced into explicit shapes by the necessity of training robots. Which is totally bizarre.

And the big question Greene wants us to ask ourselves before building these systems is: Do we know which parts of our moral intuition are features and which are bugs?
