Google’s Artificial Intelligence Project May Start Playing Video Games
DeepMind, an artificial intelligence unit at Google, is on a mission to take AI to the next level by mastering some of the most complex games in the world.
This week, DeepMind creation AlphaGo scored two consecutive victories against a world champion player of the ancient Chinese game called Go. Next up for Google domination? DeepMind co-founder Demis Hassabis says StarCraft is a potentially interesting challenge, since players have to make decisions with only partial information.
But Hassabis told The Verge that beating challenging games isn’t the be-all and end-all of DeepMind. “We’re only interested in things to the extent that they are on the main track of our research program. So the aim of DeepMind is not just to beat games, fun and exciting though that is,” he said. “It’s to the extent that they’re useful as a testbed, a platform for trying out our algorithmic ideas and testing out how far they scale and how well they do and it’s just a very efficient way of doing that.”
Winning two consecutive games of Go against a master considered among the best of his generation is already a stunning accomplishment in the artificial intelligence world. Hassabis described Go to The Verge as a “holy grail” for AI research: a game that, unlike checkers or tic-tac-toe, is impenetrable to brute mathematical force.