I had the dubious honor of becoming the proverbial man in “man versus machine” when I faced the IBM supercomputer Deep Blue across the chessboard. When I lost our rematch in 1997, it was hailed by many as a momentous occasion for human technology, on par with the Wright brothers’ first flight or the moon landing. Of course, I didn’t feel so enthusiastic about it myself, but as I licked my wounds I realized that while the era of human versus intelligent machine was ending in chess, it was only getting started in every other aspect of our lives.
Deep Blue was conclusive proof that machines could surpass humans in complex cognitive tasks that we had long assumed were unique to our highly developed brains. Deep Blue could look at 200 million positions per second, while I was limited to two or three, and yet we competed equally. The human brain is an unmatched analogy engine, finding useful patterns that let us leverage a lifetime of experience to make decisions. Using chess as a metaphor, I discuss this important distinction in my recent book Deep Thinking.
Chess has strict rules and a clear goal—checkmate. It was ideal for the old model of smart machines: Humans program in the rules and some evaluation factors to improve the algorithms’ performance, and the computer executes the code with such incredible speed that it produces superior results. But life—business, education, investing—doesn’t have such a tidy, deterministic framework. Now machine learning is pushing the frontier of what machines can do better than humans ever further—and their potential seems limitless.
Google Translate, for example, doesn’t know much about language at all. It doesn’t bother with the rules of grammar that every student of a second language must navigate. Instead, it uses astronomical amounts of real language examples to learn, much the way human infants learn a first language. It doesn’t care why a sentence is correct, and it keeps getting a tiny bit more accurate with every iteration.
As I recount in Deep Thinking, this is an old idea—it was even attempted with chess machines in the 1980s—but back then there wasn’t enough data, or the ability to sift through it fast enough, to make it useful. This year the program AlphaGo, built by the Google-backed DeepMind team led by Demis Hassabis, beat the world’s top player of Go, a game too complex for brute-force methods. Among other techniques, AlphaGo played millions of games against itself to learn what worked best.
What is worthless with a few thousand examples can be very powerful with billions of them in our data-rich world of limitless cloud access. And while 90% accuracy isn’t good enough for a self-driving car, it’s a tremendous advance in areas like medical diagnosis, where machines are already becoming more accurate than human doctors. Think about all the fields where machines will be able to teach themselves, and us, new ways of solving problems based on what works best instead of centuries of accumulated human dogma. Machines even teach themselves better ways to learn, effectively coding themselves. This is a brave new world, one in which machines are doing things humans do not know how to teach them to do, one in which machines figure out the rules.
If this sounds ominous instead of amazing, you’ve been watching too many Hollywood movies about killer robots. In the past, our tools made us stronger and faster, capable of lifting mountains and rocketing into space. Our new tools will make us smarter, enabling us to better understand our world and ourselves. Deep Blue didn’t understand chess, or even know it was playing chess, but it played it very well. We may not comprehend all the rules our machines invent, but we will benefit from them nonetheless. Our challenge is to keep thinking up new directions for artificial intelligence to explore—and that’s a job that can never be done by a machine.
Garry Kasparov is the author of Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, which was published in 2017.