ON FEB. 24, 1956, ARTHUR LEE SAMUEL played a game of checkers on television. His opponent: a 36-bit vacuum tube computer made by International Business Machines.
Samuel, then a 55-year-old researcher at IBM, had painstakingly assigned each of the 64 squares on the checkerboard a different set of machine-word identifiers, and done the same with each piece on each square. Then he programmed his IBM 701 computer to think: that is, to consider a few possible checkers moves “by evaluating the resulting board positions much as a human player might do,” Samuel would later write.
“‘Looking ahead’ is prepared for by computing all possible next moves, starting with a given board position,” he explained. For each of those potential moves, the computer would redraw the board in its electronic brain with “the old board positions being saved to facilitate a return to the starting point,” and then the process would repeat. When the indicated move didn’t result in a better outcome, the IBM 701 would try another one, and so forth, until the machine was successful.
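The lookahead Samuel describes — generate every legal move from a given position, score the resulting positions, and restore the saved position so the search can return to its starting point — can be sketched in modern terms. The toy game below (players alternately remove one or two counters; taking the last counter wins) and all its names are illustrative assumptions, not Samuel's actual 701 code, but the try-score-restore loop is the same idea.

```python
def legal_moves(counters):
    # A player may remove 1 or 2 counters, never more than remain.
    return [m for m in (1, 2) if m <= counters[0]]

def lookahead(counters, maximizing):
    # Score the position from the maximizer's perspective (+1 win, -1 loss).
    if counters[0] == 0:
        # The previous player took the last counter and won.
        return -1 if maximizing else 1
    scores = []
    for move in legal_moves(counters):
        counters[0] -= move          # try the move on the board
        scores.append(lookahead(counters, not maximizing))
        counters[0] += move          # restore the saved position
    return max(scores) if maximizing else min(scores)

def best_move(counters):
    # Choose the move whose resulting position scores best for the mover.
    best, best_score = None, float("-inf")
    for move in legal_moves(counters):
        counters[0] -= move
        score = lookahead(counters, maximizing=False)
        counters[0] += move
        if score > best_score:
            best, best_score = move, score
    return best
```

With four counters the only winning move is to take one (leaving the opponent a losing pile of three); with five, to take two. Mutating the board and undoing each trial move mirrors Samuel's note about "old board positions being saved to facilitate a return to the starting point," avoiding a copy of the board at every node.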
In short, Samuel had programmed the “digital computer to behave in a way which, if done by human beings or animals, would be described as involving the process of learning,” he said.
Before the televised match, IBM president Tom Watson—the namesake of IBM’s current-model thinking machine—predicted his company’s shares would soar after the demonstration. They did just that.
What followed was a heady period of excitement during the 1950s and 1960s, when computer scientists sketched out dreams and designs of advanced problem-solving machines that, in many cases, mirrored the robotic creations of science fiction novels.
What followed that, however, was a long reality check—a period that, it turns out, was far more incremental science than “cyborg.” Over the next half-century, the field’s most ambitious fantasies were repeatedly met with prosaic, real-world limitations. The progress was genuine, of course—from mathematical breakthroughs to major advances in computing power. But still the perceived failures led to long fallow periods that many in the field dubbed “A.I. winters.”
Well, it’s spring again in the realm of A.I.—and the ambition (and hype) are blooming big-time. Big companies are now feverishly gobbling up startups—firms that are teaching machines to master the idiosyncrasies of human conversation, to expertly recognize the world around them, and to instantly scan terabytes of data to discover patterns that no mere mortal could recognize.
Yes, a lot has changed, as we explore in our A.I. Special Report this issue. But as writers Jonathan Vanian and Vauhini Vara show, it’s time for another reality check. As the capacity of these algorithms grows exponentially, so do the questions about their potential biases and their risky assumptions. It’s an ongoing lesson that requires our own attempt at deep learning. The success of this latest stage of A.I. development may depend on it.
Which brings me to our cover story. As “thoughtful” as some machines are becoming, there are none (yet) that can mimic the spark of creative energy at the heart of entrepreneurship. Creating a company and building it may be one of the most human things that humans do.
Which is one of many reasons I find Adam Lashinsky’s feature story on Jack Ma and Huateng “Pony” Ma—the mightily ambitious founders, respectively, of Alibaba and Tencent—such an intoxicating read. In “Ma vs. Ma,” Adam expertly depicts what may be the biggest corporate battle in China while taking us inside an Internet ecosystem that may spread rapidly around the world.
For a grand tour of intelligence, both artificial and real, read on. And please, as always, let us know what you think.
A version of this article appears in the July 1, 2018 issue of Fortune with the headline “Springtime for A.I.”