Just last week, machines crossed a momentous milestone. Google’s AlphaGo, a computer algorithm, beat Go world champion Lee Sedol 4 to 1 in the ancient Chinese board game.

Unlike Western chess, where a game typically runs about 40 moves, a game of Go can entail up to 200. Back in 1997, IBM's Deep Blue trumped chess Grandmaster Garry Kasparov with a brute-force approach: searching vast numbers of possible continuations and then making the optimal choice for the next move. You can't do this with Go. The number of possible move sequences on a 19-by-19 grid compounds to a bewildering figure, an estimated 10^761, far more than the total number of atoms in the observable universe. To compete, a machine needs to think more intuitively, more like a human. AlphaGo did just that.
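
A rough back-of-the-envelope sketch shows why the numbers spiral out of control. The branching factors and game lengths below are common textbook approximations rather than exact values, and the 10^761 figure cited above rests on a more careful count, but even this crude estimate makes the gap between chess and Go plain.

```python
import math

# Rough comparison of chess and Go game-tree sizes. The branching factors
# (about 35 legal moves per turn in chess, about 250 in Go) and game lengths
# are common approximations, not exact figures; the 10^761 count in the text
# comes from a more careful enumeration.

def game_tree_exponent(branching_factor, moves):
    """Approximate number of possible games, expressed as a power of ten."""
    return moves * math.log10(branching_factor)

chess = game_tree_exponent(branching_factor=35, moves=80)   # roughly 10^124
go = game_tree_exponent(branching_factor=250, moves=150)    # roughly 10^360

print(f"Chess: roughly 10^{chess:.0f} possible games")
print(f"Go:    roughly 10^{go:.0f} possible games")
print("Atoms in the observable universe: roughly 10^80")
```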

Before AlphaGo took on Go, Google DeepMind researchers had trained its precursor to play classic video games: Space Invaders, Breakout, Pong, and others. Without any game-specific programming, the algorithm mastered each one by trial and error, pressing buttons randomly at first, then adjusting to maximize rewards. Game after game, the software proved cunningly versatile at figuring out an appropriate strategy and then applying it with machine-like consistency.

Computer scientists call this a general-purpose algorithm: one capable of teaching itself and tackling many different problem domains. It works by building a network of software and hardware that mimics the web of neurons in the human brain, which can then be programmed to seek positive rewards in the form of scores.
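
To make the mechanics concrete, here is a minimal sketch of trial-and-error reward maximization in Python. It is not DeepMind's system, which learned from raw screen pixels with a deep neural network; the button names and payoff probabilities are invented for illustration. The core loop, though, is the same idea: press buttons, watch the score, and drift toward whatever pays off.

```python
import random

# A minimal sketch of trial-and-error reward maximization (an epsilon-greedy
# learner). The buttons and payoff probabilities below are invented for
# illustration; this is not DeepMind's implementation, which paired a deep
# neural network with Q-learning over raw screen pixels.

ACTIONS = ["left", "right", "fire"]                      # hypothetical game buttons
true_payoff = {"left": 0.2, "right": 0.5, "fire": 0.8}   # unknown to the learner

value = {a: 0.0 for a in ACTIONS}   # the agent's running reward estimates
counts = {a: 0 for a in ACTIONS}
epsilon = 0.1                       # fraction of moves spent exploring at random

for step in range(10_000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)         # explore: press a random button
    else:
        action = max(ACTIONS, key=value.get)    # exploit: press the best-looking one

    reward = 1.0 if random.random() < true_payoff[action] else 0.0

    # Update the running average estimate for the chosen action.
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print(value)  # after enough trials, "fire" should carry the highest estimate
```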

AlphaGo continually reinforces and improves its performance by playing millions of games against tweaked versions of itself. The results hint at something like intuition in its inner workings: on the 37th move of the match's second game, AlphaGo made a surprise play that flummoxed Lee Sedol.
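
The self-play loop can likewise be sketched in miniature. The toy below is nowhere near AlphaGo, which trains deep policy and value networks and searches with Monte Carlo tree search; it simply has one program play the pencil-and-paper game of Nim against a copy of itself, nudging its move preferences toward whatever ended up on the winning side.

```python
import random
from collections import defaultdict

# A toy illustration of learning by self-play, far simpler than AlphaGo's
# training: the program plays Nim against a copy of itself and reinforces
# the moves that ended up on the winning side.
#
# Game: a pile of 12 stones; each turn a player removes 1-3 stones, and
# whoever takes the last stone wins.

PILE = 12
weights = defaultdict(lambda: {1: 1.0, 2: 1.0, 3: 1.0})  # move preferences per pile size

def choose_move(pile):
    """Sample a legal move in proportion to its current weight."""
    legal = {m: w for m, w in weights[pile].items() if m <= pile}
    r = random.uniform(0, sum(legal.values()))
    for move, w in legal.items():
        r -= w
        if r <= 0:
            return move
    return max(legal)

for game in range(50_000):
    pile, player = PILE, 0
    history = {0: [], 1: []}
    while pile > 0:
        move = choose_move(pile)
        history[player].append((pile, move))
        pile -= move
        if pile == 0:
            winner = player
        player = 1 - player
    # Reinforce the winner's moves, discourage the loser's.
    for p, delta in ((winner, +0.1), (1 - winner, -0.05)):
        for state, move in history[p]:
            weights[state][move] = max(0.01, weights[state][move] + delta)

# After many self-play games, the policy tends to favour leaving the opponent
# a multiple of 4 stones (the known optimal play); piles that are already a
# multiple of 4 have no winning move, so those entries keep drifting.
for pile in range(1, PILE + 1):
    legal = {m: w for m, w in weights[pile].items() if m <= pile}
    print(f"pile={pile:2d} -> take {max(legal, key=legal.get)}")
```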

But such breakneck advances naturally prompt existential questioning. Elon Musk said artificial intelligence could be “potentially more dangerous than nukes,” and likened it to “summoning the demon.” Apple cofounder Steve Wozniak went further still: “…The future is scary and very bad for people,” he argued. “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?”


Such questions, as odd as they sound, are all relevant, and the business impacts of artificial intelligence are already evident. IBM Watson, hailed as the first computer capable of understanding natural human language, showed how machine learning can go beyond games and trivia. By digesting millions of pages of medical journals and patient data, Watson provides recommendations, from blood tests to clinical trials, to physicians. A cancer doctor needs only to describe a patient’s symptoms to Watson in plain, spoken English over an iPad application.

Both Watson and AlphaGo show that the pattern-recognition skills human experts spend careers honing will soon turn cheap. Analytical judgments, like an art expert’s ability to “sense” that he is looking at a forgery, or a medical specialist’s “clinical glance” that yields a diagnosis almost as soon as a patient walks in, are exactly the sorts of human advantages that will disappear first. Similarly, analyses currently done in any Excel spreadsheet, whether supply-chain coordination, marketing-dollar allocation, or tactical resource allocation, will soon be easily mastered by machines. Ken Jennings, the former champion who lost to IBM Watson on the game show Jeopardy! in February 2011, said, “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, [we] were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines.”

So there is a threat to many jobs, but the good news is that new jobs will always emerge. When textile machines decimated the weaving and spinning trades, 20th-century workers went on to build more complex goods: automobiles, aircraft, and consumer electronics. Now that routine white-collar work is being automated, occupations that demand higher forms of human empathy and long-range planning will be the new frontier. For almost a decade we have seen the evangelization of design thinking, in which business managers are encouraged to take a human-centric perspective when conceptualizing new product offerings, from liquid detergent to MRI scanners and insulin syringes. The push toward ever more user-friendly products can only accelerate when a substantial part of office work is automated, freeing the human brain for higher purposes.

In this flux, IT experts will have to ask themselves whether they are cutting-edge enough to help companies automate human intuition faster than their competitors do.

And those in non-IT roles will need to ask whether their current job is specialized enough to avoid becoming vulnerable in the age of machines.

Given where the world is headed, everyone will need to rethink their professional existence and take a broad perspective on where, and how creatively, they could integrate knowledge from different domains into their career track.

As Mary Kay Ash said, “There are three types of people in this world: those who make things happen, those who watch things happen and those who wonder what happened.” Let us not become the last.

Howard Yu is a professor of strategic management and innovation at IMD, a business school based in Lausanne, Switzerland. Yu does not have investments in any of the companies mentioned above.