For several years, computers have made short work of human champions in Go and chess. Now, artificial intelligence researchers are taking on a game that cuts even closer to human capability: Pictionary, a guessing game that requires not strategy but the hard-to-duplicate quality of common sense.
Why it matters: The effort to play with humans — rather than against them — is a step toward an optimistic future of work in which AI cooperates with people to complete tasks, rather than wiping out workers in large numbers.
Driving the news: Researchers at the Allen Institute for Artificial Intelligence have developed an AI program that can play both sides of Pictionary, a game in which one player draws a picture to represent a word or phrase for the other player to guess.
The big picture: Pictionary only seems straightforward. Like so many tasks that come naturally to children, it's a challenge for even the most advanced AI.
- That's because it requires a grasp of nebulous concepts that are common sense to humans, but that no one has figured out how to easily teach computers.
- Try the game here.
The AI is impressively good, but it doesn't yet play like a grandmaster. Still, if the Allen researchers can make the leap from where they are now to mastering common sense, they will have accomplished a lot. Common sense is hard to define precisely, but it's an underestimated and central human quality, essential for communication.
- An important point: It's also a key stepping stone to machine intelligence that matches or surpasses human capabilities.
What they did: The Allen researchers' AI watched humans play 100,000 games. It was also taught to find words that have characteristics in common, like bread, fruit, and food.
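To make that word-grouping idea concrete, here is a minimal sketch of one common approach: comparing word vectors with cosine similarity. The vectors, threshold, and function names below are invented for illustration only; the Allen system learns its word relationships from data rather than from hand-written numbers.

```python
import math

# Hypothetical 3-dimensional "meaning" vectors, hand-made for this example.
# A real system would learn such vectors from large amounts of text or gameplay.
word_vectors = {
    "bread": [0.9, 0.8, 0.7],
    "fruit": [0.9, 0.7, 0.6],
    "food":  [1.0, 0.6, 0.2],
    "car":   [0.0, 0.9, 0.8],
}

def cosine_similarity(a, b):
    """Measure how closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def related_words(query, threshold=0.9):
    """Return words whose vectors are similar enough to the query's to count as related."""
    q = word_vectors[query]
    return [w for w, v in word_vectors.items()
            if w != query and cosine_similarity(q, v) >= threshold]

print(related_words("food"))  # ['bread', 'fruit'] -- but not 'car'
```

With similarity scores like these, a Pictionary-playing program can treat a drawing of bread as evidence for the broader concept of food, which is one way to approximate the common-sense leaps humans make effortlessly.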
I had a couple of experts play the game:
- Stefaan Verhulst, who researches human–computer interaction at New York University, told me that the AI reasons its way to a guess much as a human would. That could be useful, he said, in situations where a computer is missing important information and has to fill in the blanks.
- But, but, but: Dileep George, co-founder of the AI company Vicarious, said the player is still far from exhibiting genuine common sense. He likened it to the auto-complete feature on smartphones — a system that can guess what a user intends to type, but doesn't have a deep understanding of the world.
Watching the computer tells us something about people, too. Since the AI was trained by watching humans play, it has absorbed some human biases.
- In one game, I was assigned the phrase "couple buying a car." I drew two men, a dollar bill, and a car, but the system couldn't guess the phrase. I added a wedding ring between the men, but it still didn't understand.
- When I changed one of the men to a woman, the AI immediately guessed correctly.
- This type of mistake — where AI reflects the narrow point of view of the humans it learns from — is a common problem. When AI systems are in charge of hiring, doling out loans, or evaluating parole applications, the danger of creeping bias is far greater.
My thought bubble: The AI is better at guessing than at drawing. If a player can't immediately guess the phrase and asks for a new drawing, the AI can get stuck.
- This is a difficult situation, because the computer must "proactively adapt if [its] human partner cannot guess," says Ani Kembhavi, a researcher at the Allen Institute. It must know what to add that will get the person across the finish line.
- Figuring out how to fill in the gaps when a person isn't understanding is key to human–machine communication.