
China’s Big Tech built some of the world’s most powerful algorithms. Beijing just exposed some closely guarded details online

August 16, 2022, 4:15 PM UTC
Food delivery drivers for online shopping platform Meituan stand in formation before starting their work along a street in Beijing on October 19, 2021.
Jade Gao—AFP/Getty Images

Grady McGregor here in Hong Kong, filling in for Jeremy.

Late last week, Chinese regulators publicly shared details on 30 algorithms that power some of the country’s most widely used apps and websites, an unprecedented measure that marks a new escalation in Beijing’s years-long campaign to rein in the power of big tech.

The list of algorithms included details on the underlying technology that powers apps from China’s largest internet companies, including e-commerce firm Alibaba, social media company and TikTok owner ByteDance, and delivery giant Meituan. The A.I.-driven recommendation algorithms are highly valuable trade secrets that have come to govern many parts of day-to-day life in China, determining which videos people watch, which products they buy, and which routes food delivery workers take.

The fact that the Cyberspace Administration of China (CAC), the country’s tech regulator, released the list to the public was unusual—and not just for China.

“I’m not aware of any other country in the world that has a public-facing list of every piece of code that manages your [online] decisions,” says Kendra Schaefer, head of China tech policy research at Trivium China.

It’s been clear that Chinese tech companies would need to share some details of their algorithms with authorities since at least March. That month, China’s government implemented a sweeping new law governing recommendation algorithms. The new law banned any algorithm that “might threaten national security, social stability or induce user over-indulgence or wanton consumption,” says Angela Zhang, associate professor at the University of Hong Kong’s law school.

One of those new rules required Chinese tech companies to register their algorithms with the CAC, which explains how authorities obtained at least some technical details from the companies. The information the government has since released to the public does not include the actual computer code underlying the various algorithms, and it’s not clear whether Chinese firms have handed over more granular, code-level details that the government has kept private. “How much access the CAC ends up getting to that code is a big question mark,” Schaefer says.

For now, Beijing’s crackdown on the recommendation algorithms appears equal parts populist and draconian—Beijing wants to grant consumers more rights to their information online as long as the government ultimately retains more power over the industry. “Half of the regulations are focused on very forward-looking, very positive consumer rights issues,” says Schaefer. “Half of the regulations are focused on tightening content control online.”

Big tech companies, meanwhile, appear to be clear losers of Beijing tightening the algorithm rules, and the government’s approach raises questions about the broader future of A.I. development within the world’s most populous nation.

Matt Sheehan, a fellow at the Carnegie Endowment for International Peace specializing in Chinese A.I. policy, says that Beijing has not completely soured on A.I. Rather, he says authorities are aiming to re-align uses of A.I. with the government’s larger socio-political goals, even if that means weakening consumer-facing algorithms.

“A.I. is woven throughout China’s economy… [so authorities are now] regulating the actual operation of these algorithms to ensure that they are pushing in the direction that they want,” he says.

And, at the same time that Beijing is cracking down on consumer-facing technology firms that deploy A.I., the government has prioritized ‘deep tech’ applications of A.I. such as in robotics and industrial applications, says Sheehan.

Investors and the companies themselves have begun to follow the political winds. Last month, the Financial Times reported that Neil Shen, the influential Chinese investor and founder of Sequoia China, planned to deploy a new $9 billion fund to support artificial intelligence and other technologies like semiconductors, while continuing to divest from consumer-facing A.I. companies like Meituan. Social media giant Tencent, meanwhile, has stepped up its own ‘deep tech’ efforts in the past year, putting billions into new A.I. and robotics investments.

“There’s plenty of ways to use A.I. that are perfectly in line with the Chinese Communist Party’s vision for the economy and society,” he says. “Companies are trying to strategically fall in line with those priorities.”

Here’s the rest of this week’s news in A.I.

A.I. IN THE NEWS

TikTok launched a text-to-image A.I. generator. Called ‘A.I. Greenscreen,’ the recently added feature lets users of the short-form video platform create unique, computer-generated images by typing in a text prompt. The addition of text-to-image software to TikTok's platform marks a rapid expansion of a technology that is less than two years old. In early 2021, artificial intelligence research laboratory OpenAI released DALL-E, the original text-to-image platform, which became popular as users experimented with making wholly original images by simply typing in words like 'flower' or 'chair.' TikTok's 'A.I. Greenscreen' feature is still primitive compared to DALL-E, Google's Imagen, or Midjourney's eponymous platform, according to The Verge, with many prompts producing abstract images. But that may be a good thing, The Verge notes, given the potential for users to abuse the technology to make explicit or hateful images.

Chinese A.I. giant SenseTime is selling a new robot that plays Chinese chess. The SenseRobot sells for $299 (or $368 for a pro version) in China and comes equipped with a mechanical arm, camera, and chess board. The move marks SenseTime's first foray into the consumer market, after years of struggling to reach profitability and fighting political battles. SenseTime develops commercial A.I. technologies used in surveillance cameras, self-driving cars, and facial recognition systems. The U.S. government alleges that at least some of SenseTime's technology has been used to perpetuate human rights violations in China's western Xinjiang province. SenseTime has denied the allegations. Now, the firm may hope that a chess-playing robot can soften its image. "We hope to create a robot product that can truly think and act through innovative and leading A.I. technology, allowing industrial-grade A.I. technology to enter thousands of homes and interact with children and elders in a real way,” SenseTime chairman Xu Li tells SCMP.

Codelco uses A.I. to get more copper from aging mines. In 2020, Chilean mining firm Codelco introduced a new digital data center that uses machine learning to aid in mining copper. The use of A.I. helps Codelco optimize the processing of extracted ore, essentially helping the firm get more value out of the material it already mines. Codelco tells Bloomberg that A.I. is now helping it mine 8,000 more metric tons of copper per year, translating to an $80 million boost in annual profits. Chile holds the world's largest copper reserves, but Codelco's use of A.I. in copper mining has helped it offset the declining quality of Chilean ore in recent years.

The U.S. government passed the CHIPS and Science Act. The headlines regarding the new law were mostly about semiconductors because the legislation allocates $52 billion to promote the semiconductor manufacturing industry in the U.S. But the CHIPS and Science Act also includes roughly $200 billion for research into A.I. and other critical emerging technologies. The Wall Street Journal's editorial board argued that the investment would only create a more bloated government bureaucracy, but U.S. President Joe Biden said the investment could make the U.S. more globally competitive for years to come. “This bill is about more than chips. It’s about science as well…this increased research and development funding is going to ensure the United States leads the world and the industries of the future, from quantum computing to artificial intelligence to advanced biotechnology,” Biden said about the new law.

EYE ON A.I. TALENT

Cybersecurity firm Vectra A.I. hired Myrna Soto as a key strategic advisor to the company’s leadership team. Soto was most recently the chief strategy officer of technology provider Forcepoint. Vectra AI uses A.I. tools to detect and respond to cyber threats for companies that use hybrid or multiple cloud platforms.

Corvus Insurance promoted Madhu Tadikonda from president to CEO, taking over from Corvus founder Phil Edmundson, who will become chair of Corvus’ board. The cyber insurance firm offers commercial insurance policies powered by A.I.-driven risk tools, and recently expanded into the U.K. and Germany.

Cybersecurity firm NetWitness hired Ken Naumann as its new CEO. Naumann was previously CEO of data forensics firm AccessData. NetWitness offers a range of A.I.-driven cybersecurity solutions to detect and eliminate digital threats.

EYE ON A.I. RESEARCH

Boston Dynamics and Hyundai Motor Group are launching a new Boston Dynamics AI Institute. The two companies made the announcement via joint press release on Friday, saying they would together invest $400 million to get the institute off the ground. The institute has not provided many clues about the specific projects it will pursue, but the release said it will focus on four core areas: cognitive A.I., athletic A.I., organic hardware design, and ethics and policy. On a newly created website, Marc Raibert, the executive director of the new institute, pointed toward loftier goals. "We need to make robots smarter, more agile and dexterous, and generally easier to use — more like people. Once we do that, robots and other types of intelligent systems will increase productivity, free people from dangerous work, care for the disabled, and generally help people live better lives," he writes.

FORTUNE ON A.I.

The U.S. is worried it will lose its scientific edge to China. By one new measure, it already has—Nicholas Gordon

‘One of the most dangerous and irresponsible actions by a car company in decades’: Activist Ralph Nader urges regulators to recall Tesla’s self-driving technology—Tristan Bove

Elon Musk wrote an op-ed for China’s internet regulator, seeking ‘like-minded Chinese partners’ and pitching Tesla Bot as an aid in its population crisis—Nicholas Gordon

United Airlines bets $10 million on flying taxis—Chris Morris

Inventory issues are hurting the bottom line. It’s time for a hybrid approach to supply chains—Kal Raman

BRAINFOOD

Injecting A.I. into social science is a no-brainer, right? Indeed, researchers have come up with ground-breaking new findings after using machine learning methods for the first time, claiming that A.I.-driven pattern recognition has revolutionized fields like political science and psychology. One study claimed that artificial intelligence allowed researchers to predict when a civil war would break out with 90% accuracy, a 20% improvement on traditional statistical methods.

But some skeptics, like Princeton professor Arvind Narayanan and his PhD student Sayash Kapoor, have begun to question the A.I.-assisted findings. Kapoor and Narayanan say they were not able to replicate the civil war finding or several others using their own machine learning methods. They believe the initial experiments suffered from “data leakage,” meaning the models were inadvertently trained on information from the data they were later evaluated on, inflating their apparent accuracy.
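Data leakage is easiest to see in a toy pipeline. The following is a minimal sketch (hypothetical data, not drawn from the studies in question) of one common form of leakage: computing preprocessing statistics over the full dataset before splitting it, so the "training-time" step quietly incorporates information from the held-out test set.

```python
import random
import statistics

# Toy dataset: 100 samples from a fixed distribution.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(100)]
train, test = data[:80], data[80:]

# Leaky pipeline: the normalization statistic is computed on ALL the
# data, so the held-out test set influences preprocessing.
leaky_mean = statistics.mean(data)

# Clean pipeline: the statistic is computed on the training split only.
clean_mean = statistics.mean(train)

# The two means differ, showing that the test set leaked into the
# "training-time" preprocessing in the leaky version.
print(leaky_mean != clean_mean)
```

The fix is procedural rather than clever: split the data first, then fit every preprocessing step (and the model) on the training split alone, which is the mistake Kapoor and Narayanan argue the original studies made in various forms.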

Now, they warn the misuse of machine learning in science has created a ‘reproducibility crisis’ and countless other studies could be plagued with similar issues. “The idea that you can take a four-hour online course and then use machine learning in your scientific research has become so overblown… People have not stopped to think about where things can potentially go wrong,” Kapoor tells Wired.

Our mission to make business better is fueled by readers like you. To enjoy unlimited access to our journalism, subscribe today.