Employees may need to keep up ‘the pretense of working’ as automation spreads, says A.I. expert Kai-Fu Lee

August 12, 2021, 8:38 AM UTC

As businesses begin to automate low-level service work, companies may start creating fake tasks to test employee suitability for senior positions, says Kai-Fu Lee, the CEO of Sinovation Ventures and former president of Google China.

“We may need to have a world in which people have ‘the pretense of working,’ but actually they’re being evaluated for upward mobility,” Lee said at a virtual event hosted by Collective[i], a company that applies A.I. to sales and CRM systems. 

Work at higher levels of a company, which requires deeper and more creative thinking, is harder to automate and must be completed by humans. But if entry-level work is fully automated, companies will have little reason to hire and groom young talent. So, Lee says, companies will need to find new ways to bring in entry-level employees and build a path for promotion.

It was one of several predictions Lee made about the possible social effects of widespread adoption of A.I. systems. Some were drawn from his upcoming book, AI 2041: Ten Visions for Our Future—a collection of 10 short stories, written in partnership with science fiction author Chen Qiufan, that illustrate ways that A.I. might change individuals and organizations. “Almost a book version of Black Mirror in a more constructive format,” joked Lee, a well-known expert in the field of A.I. and machine learning and author of the 2018 book AI Superpowers: China, Silicon Valley, and the New World Order. 

Talk of A.I. and its role in social behavior often centers on the tendency of algorithms to reflect and exacerbate existing social biases. For example, a contest by Twitter to root out bias in its algorithms found that its image-cropping model prioritized thinner white women over people of other demographics. Data-driven models risk reinforcing social inequality, especially as more individuals, companies, and governments rely on them to make consequential decisions. As Lee noted, when a “company has too much power and data, [even if] it’s optimizing an objective function that’s ostensibly with the user interest [in mind], it could still do things that could be very bad for the society.”

Despite the potential for A.I. to do harm, Lee has faith in developers and A.I. technicians to self-regulate. He supported the development of metrics to help companies judge the performance of their A.I. systems, in a manner similar to the measurements used to determine a firm’s performance against environmental, social, and corporate governance (ESG) indicators. “You just need to provide solid ways for these types of A.I. ethics to become regularly measured things and become actionable.”

Yet he noted that more work needs to be done to train programmers, including the creation of tools to help “detect potential issues with bias.” More broadly, he suggested that A.I. engineers adopt something “similar to the Hippocratic oath in medical training,” referring to the set of professional ethics that doctors adhere to during their dealings with patients, most commonly summarized as “Do no harm.”

“People working on A.I. need to realize the massive responsibilities they have on people’s lives when they program,” Lee said. “It’s not just a matter of making more money for the Internet company that they work for.”
