Game over, humans.
Add it to the list of things computers can do faster than people: master ancient Chinese board games.
On Wednesday, researchers from Alphabet-owned Google DeepMind announced that they’ve done something previously thought to be impossible for computers: defeat a world-class Go player.
The London-based artificial intelligence firm published its findings in the science journal Nature. The achievement is noteworthy because it requires not only massive computing power but also the ability to learn and store information the way a human does.
Go is a two-player board game in which players place black and white stones, capturing enemy pieces and trying to surround more territory than their opponent. At first glance, the game looks like a more sprawling version of checkers, but with more possible board configurations than atoms in the universe, it can't be conquered by weighing all the possibilities. To play well, players have to strategize in real time; they need to sense the best moves.
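The atoms-in-the-universe comparison holds up to a back-of-the-envelope check. Here's a quick sketch (not from the Nature paper) using the standard upper bound of three states per board point and the commonly cited estimate of 10^80 atoms:

```python
# Rough illustration of Go's combinatorial scale; a back-of-the-envelope
# sketch, not a calculation from the DeepMind paper.
BOARD_POINTS = 19 * 19  # 361 intersections on a full-size board

# Each point is empty, black, or white, so 3**361 is an upper bound on
# board configurations (many are illegal, but the order of magnitude holds).
upper_bound = 3 ** BOARD_POINTS

ATOMS_IN_UNIVERSE = 10 ** 80  # commonly cited estimate

print(f"Go upper bound: ~10^{len(str(upper_bound)) - 1}")  # → ~10^172
print(upper_bound > ATOMS_IN_UNIVERSE)                     # → True
```

Even this loose bound is roughly 10^92 times the number of atoms, which is why exhaustively tabulating every position is a non-starter.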
Since Go players rely on creativity and intuition to succeed, a computer's ability to plot out every possible move gives it no great advantage. And while computers started beating human chess masters in 1997, what AlphaGo is doing is more lifelike.
As DeepMind's David Silver explains it, the AI does not act like a massive search engine, cataloging every possible move the way Google catalogs the web. Instead, it thinks a few moves ahead and makes decisions based on what it has learned from previous games, a much more human, less exhaustively calculated way to play.
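The general idea Silver describes can be sketched in miniature: rather than expanding every legal continuation, a learned prior narrows the candidates, and a learned evaluation cuts the lookahead off after a few plies. The toy game below (take 1, 2, or 3 stones; whoever takes the last stone wins) and its hand-written `policy` and `value` functions are purely hypothetical stand-ins for AlphaGo's neural networks:

```python
# A toy sketch of "look a few moves ahead, guided by learned priors."
# The game and the heuristics are illustrative only, not DeepMind's method.

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def policy(stones, top_k=2):
    # Stand-in for a learned prior: prefer moves leaving a multiple of 4,
    # and keep only the top few candidates instead of expanding everything.
    moves = legal_moves(stones)
    return sorted(moves, key=lambda m: (stones - m) % 4 == 0, reverse=True)[:top_k]

def value(stones):
    # Stand-in for a learned evaluation, from the side-to-move's view:
    # multiples of 4 are lost positions in this game.
    return -1.0 if stones % 4 == 0 else 1.0

def lookahead(stones, depth=2):
    """Depth-limited negamax over policy-favored moves only."""
    if stones == 0:
        return -1.0           # opponent just took the last stone and won
    if depth == 0:
        return value(stones)  # cut off with the learned estimate
    return max(-lookahead(stones - m, depth - 1) for m in policy(stones))

best = max(policy(10), key=lambda m: -lookahead(10 - m, depth=2))
print(best)  # → 2 (leaving 8 stones, a multiple of 4)
```

The point of the sketch is the shape of the search, not the game: a handful of promising moves, a shallow tree, and an evaluation function in place of exhaustive calculation.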
Google invited Fan Hui, the reigning European champion, to compete against its system, AlphaGo. In the first round, Hui went easy on the AI, looking to give the computer a fighting chance. But after AlphaGo started beating him, Hui got fierce. The final score? AlphaGo beat Hui 5-0.
“It’s very hard for me, but it’s a reality,” Hui told Nature. Unlike computers, he said, sometimes humans can really mess up.
Google DeepMind isn't developing this AI just to play games. In the future, the company hopes to apply the technology in fields like healthcare, for the very human tasks of diagnosing illnesses and devising treatment plans.
But it is also continuing to train the system against more human players. Next up: South Korea's Lee Sedol, an international Go champion.
Meanwhile, Facebook has also been racing to master Go, so there could be an all-AI match someday. But so far, the social network has only one researcher dedicated to the task, and its six-month-old algorithms haven't reached pro status… yet.