
What will A.I. make possible by 2041? Technologist Kai-Fu Lee has some ideas

September 14, 2021, 4:48 PM UTC

Kai-Fu Lee, the computer scientist who once ran Google’s operations in China before becoming a prominent tech investor, wrote AI Superpowers in 2018. It remains a must-read for its trenchant analysis of how artificial intelligence will likely reshape geopolitics and global business.

Today, Lee is releasing a new book, published in the U.S. by Penguin Random House, laying out his vision of our A.I.-enabled future. Called A.I. 2041, the book is an unusual collaboration between Lee and Chen “Stan” Qiufan, an award-winning Shanghai-based science-fiction writer who once worked for Lee at Google.

Rather than presenting a straightforward analysis, the new volume is organized around 10 factual essays, written by Lee, each explaining a different aspect of the technology and its potential impact over the next two decades. Each essay is paired with a short work of speculative fiction by Qiufan, set in a different country and centered on the same themes Lee identifies. The effect is a bit like reading an issue of McKinsey Quarterly and then watching an episode of the dark sci-fi TV series Black Mirror.

The book is worth reading, even for those familiar with many of the A.I. trends Lee describes. That’s a testament to Qiufan’s contribution: His stories are intriguing, haunting, and moving. Last week, I interviewed both Lee and Qiufan about the book. Lee told me he had decided to collaborate with a science-fiction writer because he thought he would be able to reach a larger audience that way. “The stories can really vividly describe what 20 years will be like,” he said. “People will get drawn in by the stories to the technologies.”

Among Lee’s many predictions for 2041 are:
•A.I. will enable precision medicine, with doctors often simply rubber-stamping diagnoses the software makes. But this will free doctors to spend more time being compassionate caregivers.

•A.I. will play a key role in the discovery of new medicines, and will also help enable other revolutionary technologies such as robotic surgery and nanorobots that could travel within a patient’s body to deliver treatment. Together, these advances will extend average human lifespans in the developed world by as much as 20 years.

•A.I. will transform many financial fields, including insurance underwriting, and will take over most market-based trading in financial instruments such as stocks and commodities. But humans are likely to remain essential in less liquid areas that revolve around negotiation and higher-risk dealmaking, such as venture capital and commercial real estate.

•A.I. will revolutionize education, providing tailored learning for most children so that human teachers can devote more of their time to instilling skills such as critical thinking.

•Household robots will free us from many of the mundane tasks of cleaning and making deliveries.

•Autonomous vehicles will become commonplace in most advanced cities.

But there are a few things that Lee isn’t predicting. For instance, he doesn’t think scientists will be able to create human-like “artificial general intelligence” (AGI) that can perform the same breadth of tasks a person can. Creating AGI would take dozens of big computer science breakthroughs, according to Lee. He argues that there’s only been one big A.I. breakthrough (getting deep neural networks to actually work) in the past 30 years. Expecting dozens in the next 20 years, he says, is too optimistic.

Many of Qiufan’s stories in the book have a decidedly dystopian edge. A few deal with explicitly negative consequences of A.I., such as the use of deepfakes for political disinformation, and the belated scramble by governments to control the spread of lethal autonomous weapons. But even the stories ostensibly about A.I.’s more positive aspects, such as smarter insurance underwriting that lowers costs and encourages healthier lifestyle choices, left me disquieted. In most cases, the characters in the stories had to surrender a lot of personal data and, more importantly, sacrifice a significant amount of autonomy to reap the benefits of A.I.

Qiufan laughed when I asked him about this. “Compared to other stories I wrote before, this is the brightest thing I have ever written,” he says. Some amount of tension is necessary to create narrative drama and develop characters. “I tried to create authenticity and nuance,” he says. “I think a little bit of downside, a little bit of dark side, is necessary, but we also give out the hope that we can really leverage A.I. as a super powerful tool.”

Lee pointed out that my concern about the loss of autonomy and privacy that many of the characters experienced in the stories might be a particularly American disquiet. “Another group of people might read it differently,” he says. In particular, to people from Asian cultures, which tend to be more collectivist, the stories may seem less disturbing, he says.

The technologist also took the opportunity to correct a misperception of his earlier book, AI Superpowers. In that book, Lee laid out the relative strengths the U.S. and China possessed in the three pillars of A.I.—data, computing power, and algorithms—and how this was likely to position them as world leaders in the technology. But some have credited Lee’s book with predicting or even inadvertently helping to fuel the A.I. arms race between Beijing and Washington. Lee says that was never his intention and is a misinterpretation of that book’s thesis. He merely wanted to highlight how companies in each region could leverage unique strengths to become dominant in A.I. It was about business competition and not the competition between nation-states. He says the geopolitical tension between the U.S. and China could, by preventing the free exchange of ideas and expertise, as well as limiting capital for technology, actually harm the entire world by slowing A.I.’s development.

And with that, here’s the rest of this week’s A.I. news, brought to you with help from my “Eye on A.I.” co-author Jonathan Vanian.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

The U.S. and Israel A.I. connection. A new bill called the United States-Israel Artificial Intelligence Center Act would authorize $10 million to fund a government-sanctioned A.I. research center that would benefit both countries, according to a report by news service Nextgov. The center would include members of the private sector and academia in both the U.S. and Israel, and focus on research related to image classification, object detection, speech recognition, data labeling, and machine-learning explainability, the report said.

The United Kingdom’s A.I. and privacy dilemma. United Kingdom officials are considering “rewriting or deleting” a section of the General Data Protection Regulation (GDPR) inherited from the European Union that requires human review of certain algorithmic decisions, the Financial Times reported, citing unnamed sources. That section, known as Article 22, ensures that people can seek human reviews of automated decisions involving sensitive tasks like awarding loans or hiring.

DeepMind’s uneasy ties with Google. Employees at A.I. research firm DeepMind pondered a separation from parent company Google and even registered “a new company called DeepMind Labs Limited, as well as a new holding company,” according to a report by Business Insider. The article probes concerns some DeepMind employees had about Google and parent company Alphabet, particularly over Google’s now-abandoned “Project Maven” Pentagon A.I. contract. A DeepMind spokesperson told the publication that “DeepMind's close partnership with Google and Alphabet since the acquisition has been extraordinarily successful.”

An A.I. nightmare comes to life. A researcher who studies deepfakes, which are A.I.-generated photos and videos that look real but are fake, has discovered a website that lets people create deepfake pornography using photos of people’s faces, according to a report by MIT Tech Review. Thankfully, the website “exists in relative obscurity” and doesn’t appear to generate accurate deepfakes, MIT Tech Review wrote (the publication did not reveal the website’s actual name). But the mere existence of the website is alarming, considering that “between 90% and 95% of all online deepfake videos are nonconsensual porn, and around 90% of those feature women.”

EYE ON A.I. TALENT

Apple chose Kevin Lynch to lead its self-driving car project, Bloomberg News reported. Lynch joined Apple in 2013 and previously worked at Adobe.

VMware has named Kit Colbert to be the IT software firm’s chief technology officer. Colbert is a longtime VMware employee, having joined the firm in 2003. Over the years, Colbert has contributed to the development of the company’s core vSphere server and data center management software.

Sema4 has picked Gustavo Stolovitzky to be the healthcare tech firm’s chief science officer. Stolovitzky has spent years applying data analytics to biology and worked at IBM Research for 23 years where he also became an IBM Fellow, the company’s highest technical appointment.

EYE ON A.I. RESEARCH

A.I. could help solve the bot problem. Researchers from the University of Plymouth, the University of Portsmouth, and the National and Kapodistrian University of Athens, Greece published a paper about using deep learning to automatically detect Twitter accounts that are controlled by bots rather than humans. The researchers’ deep-learning system, which also used graph database technology to map correlations between data points, achieved 92% accuracy in identifying automated Twitter bot accounts.

The paper was accepted for publication at the 20th International Conference on Next Generation Wired/Wireless Advanced Networks and Systems.

A.I.’s role in spreading disinformation. A report authored by groups including the Anti-Defamation League, Decode Democracy, and Mozilla examines the role of A.I. in helping to amplify and spread disinformation across social media platforms. The report details how “AI and ML-based tools used for ad-targeting and delivery, content moderation, and content ranking and recommendation can spread and amplify misinformation and disinformation online.”

FORTUNE ON A.I.

Are we finally ready for smart glasses?—By Declan Harty

How digital surveillance thrived in the 20 years since 9/11—By Jonathan Vanian

Intel CEO says ‘big, honkin’ fab’ planned for Europe will be world’s most advanced—By Christiaan Hetzner

Germany’s ‘sovereign cloud’ is coming—and it’s provided by Google—By David Meyer

Unicorn startup Papaya Global nearly quadruples its valuation, eyes an IPO—By Lucinda Shen

BRAIN FOOD

A.I.’s big energy woes. Tech publication The Register examines the environmental problems posed by deep-learning systems, which require a lot of energy to train and run. Massive neural networks that can do feats like better understand human language require enormous amounts of electricity, and some experts are concerned that researchers aren’t paying attention to the energy costs of these deep-learning systems. The article examines some of the hardware and software techniques researchers are experimenting with to make deep learning a more environmentally friendly task.

From the article: One of the most frequently quoted papers on this topic, from the University of Massachusetts, analysed training costs on AI including Google's BERT natural language processing model. It found that the cost of training BERT on a GPU in carbon emissions was roughly the same as a trans-American jet flight. 
