Google used a ton of data to train a table tennis-playing robot to compete with human opponents and get better at it. The results were impressive, representing a leap forward in robotic speed and dexterity. And it looks like it’s actually really fun.
“Achieving human-level speed and performance in real-world tasks is a compass for the robotics research community,” begins a paper written by a team of Google scientists who helped create, train, and test the table tennis bot.
Certainly, we've seen plenty of progress in robotics that allows humanoid machines to perform real-world tasks with precision, from chopping ingredients for dinner to working in a BMW factory. But as the Google team's quote suggests, adding speed to that precision has been a bit more, well, slow-moving.
That’s why the new table tennis-playing robot is so impressive. As you can see in the video below, the bot was able to hold its own against human opponents, albeit not yet at the Olympic level. It won 13 of its 29 matches, a 45% success rate. While that’s certainly better than what the New Atlas writers could do against any opponent, the bot was only successful against beginner-to-intermediate players; it lost all of its matches against advanced players. It also lacked the ability to serve the ball.
Some highlights – Achieving competitive robot table tennis at human level
“Even a few months ago, we predicted that the robot would not be able to realistically win against people it hadn’t played with before,” Pannag Sanketi told MIT Technology Review. “The system absolutely exceeded our expectations. The way the robot defeated even its strong opponents was mind-blowing.” Sanketi, who led the project, is a senior staff software engineer at Google DeepMind. DeepMind is the company’s artificial intelligence arm, so this research was ultimately as much about the datasets and decision-making as it was about the actual performance of the paddle-wielding robot.
To train the system, the researchers collected a large amount of data about ball states in table tennis, including things like spin, speed, and position. Then, during simulated matches, the bot’s “brain” was trained on the basics of the game. That gave it a strong enough foundation to face human opponents. During real matches, the system used a set of cameras to track the ball and respond to human opponents using what it had learned. It was also able to keep learning and try new tactics against its opponents, meaning it could improve on the fly.
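To get a feel for the kind of data involved, here is a minimal sketch of what a recorded ball state might look like and how it could feed a simple prediction. This is purely illustrative: the `BallState` record and the drag-free projectile model are assumptions for the example, not DeepMind's actual representation or physics.

```python
from dataclasses import dataclass

@dataclass
class BallState:
    """Hypothetical snapshot of the kinds of quantities the
    researchers reportedly collected: position, speed, and spin."""
    x: float     # metres along the table (0 = robot's end)
    y: float     # metres above the table surface
    vx: float    # horizontal velocity, m/s
    vy: float    # vertical velocity, m/s (positive = upward)
    spin: float  # top-spin rate, rev/s (ignored by this toy model)

G = 9.81  # gravitational acceleration, m/s^2

def predict_landing_x(state: BallState) -> float:
    """Toy projectile model: where does the ball return to table
    height (y = 0)? Spin and air drag are ignored; this only shows
    how a tracked ball state could drive a paddle-positioning guess."""
    # Solve y + vy*t - 0.5*G*t^2 = 0 for the positive root t.
    a, b, c = -0.5 * G, state.vy, state.y
    t = (-b - (b * b - 4 * a * c) ** 0.5) / (2 * a)
    return state.x + state.vx * t
```

A real system would replace this hand-written physics with a learned model and account for spin, which strongly affects the bounce, but the input/output shape (a ball state in, a predicted interception point out) is the same idea.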
“I’m a big fan of seeing robotic systems working with and around real people, and this is a great example of that,” Sanketi told MIT. “It may not be a strong player, but the raw materials are there to continue to improve and eventually get there.”
The video below shows more details of the bot in its training phase and the various skills it can use.
Demonstrations – Achieving competitive robot table tennis at human level
The research was published on the preprint server arXiv.
Sources: MIT Technology Review, Google