LiveScience
Tia Ghose

Google DeepMind can beat humans at table tennis

Image: A robotic arm playing table tennis against a human.

Google's DeepMind can control a robotic arm well enough to beat mere mortals at table tennis, a new study reports. But Fan Zhendong, who won 2024 Olympic gold in both the men's singles and team table tennis events, can rest easy: The artificial intelligence (AI)-powered robot could beat only mediocre players, and only some of the time, according to the study, which was published Aug. 7 to the preprint database arXiv and has not been peer-reviewed.

Robots can now cook, clean and perform acrobatics, but they still struggle to respond quickly to fast-changing, real-world conditions.

"Achieving human-level performance in terms of accuracy, speed and generality still remains a grand challenge in many domains," the researchers wrote in the study.


To overcome this limitation, the researchers combined an industrial robot arm with a customized version of DeepMind's ultrapowerful learning algorithm. DeepMind uses neural networks, layered architectures that loosely mimic how the human brain processes information, to gradually learn from data. So far, it has beaten the world's best Go player, predicted the structures of nearly every protein in the human body, cracked decades-old mathematics problems and more.
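For readers curious what "layered" means in practice, here is a minimal, purely illustrative sketch of a feedforward network in Python. It is not DeepMind's model; the layer sizes, activation function and random weights are all made-up assumptions.

```python
# Minimal sketch of a layered ("feedforward") neural network, for illustration only.
# This is NOT DeepMind's model; layer sizes, activation and weights are made up.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class TinyNet:
    """Two hidden layers mapping an input vector to an output vector."""
    def __init__(self, sizes=(8, 16, 16, 4)):
        self.weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(n) for n in sizes[1:]]

    def forward(self, x):
        # Each layer transforms the previous layer's output, loosely analogous
        # to successive stages of processing in the brain.
        for w, b in zip(self.weights[:-1], self.biases[:-1]):
            x = relu(x @ w + b)
        return x @ self.weights[-1] + self.biases[-1]

net = TinyNet()
print(net.forward(rng.normal(size=8)))  # prints 4 output values
```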

The system was trained to master specific aspects of the game, such as learning the rules, creating topspin, delivering forehand serves or using backhand targeting, drawing on both real-world and simulated data. As the AI learned, the researchers also collected data on its strengths, weaknesses and limitations. They then fed this information back to the AI program, giving DeepMind's unnamed agent a realistic picture of its own abilities. In play, the system picked which skills or strategies to use in the moment, taking into account its opponent's strengths and weaknesses, much as a human table-tennis player would.
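As a rough illustration of that hierarchical idea, here is a short Python sketch of a library of low-level skills, each with self-reported strengths, and a high-level chooser that picks one based on the incoming ball. All of the names, numbers and data structures are hypothetical stand-ins, not taken from the paper.

```python
# Illustrative sketch of the hierarchical idea described above: a library of
# low-level "skills" plus a high-level chooser that weighs each skill's
# self-reported strengths against the current ball. Everything here is a
# hypothetical stand-in, not DeepMind's actual system.
from dataclasses import dataclass

@dataclass
class BallState:
    speed: float       # incoming ball speed (made-up units)
    height: float      # height over the table, in metres
    to_backhand: bool  # whether the ball is heading to the robot's backhand side

@dataclass
class Skill:
    name: str
    # Crude "skill descriptor": estimated success rate under different conditions,
    # standing in for the strength/weakness statistics fed back to the agent.
    vs_fast: float
    vs_high: float
    vs_backhand: float

    def score(self, ball: BallState) -> float:
        s = 1.0
        if ball.speed > 8:    s *= self.vs_fast
        if ball.height > 0.4: s *= self.vs_high
        if ball.to_backhand:  s *= self.vs_backhand
        return s

SKILL_LIBRARY = [
    Skill("forehand_topspin",      vs_fast=0.7, vs_high=0.5, vs_backhand=0.3),
    Skill("backhand_block",        vs_fast=0.6, vs_high=0.4, vs_backhand=0.8),
    Skill("forehand_serve_return", vs_fast=0.5, vs_high=0.6, vs_backhand=0.4),
]

def choose_skill(ball: BallState) -> Skill:
    """High-level controller: pick the skill expected to work best right now."""
    return max(SKILL_LIBRARY, key=lambda skill: skill.score(ball))

incoming = BallState(speed=9.0, height=0.3, to_backhand=True)
print(choose_skill(incoming).name)  # -> "backhand_block"
```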

The researchers then pitted their AI-controlled robot against 29 human players. DeepMind's robot arm beat all of the beginners and about 55% of the intermediate players, but it was trounced by the advanced players. On an international rating scale, it would rank as a solid amateur.

DeepMind's robot arm did have some systematic weaknesses, however. For example, it struggled with high balls and, like many of us, found backhand shots more challenging than forehand ones.

Most of the human players seemed to enjoy playing against the system. "Across all skill groups and win rates, players agreed that playing with the robot was 'fun' and 'engaging,'" the researchers wrote in the study.

The new approach could be useful for a wide range of applications that call for quick responses in dynamic physical environments, the researchers said.
