Last Friday China’s Internet watchdog, the Cyberspace Administration of China (CAC), published draft regulations governing recommendation algorithms—the A.I.-driven software that social media giants use to populate consumer news feeds.
“These rules are the first of their kind globally,” says Kyle Freeman, a partner at law firm Dezan Shira & Associates, noting that the proposals go far beyond the scope of the European Union’s General Data Protection Regulation (GDPR)—one of the most comprehensive policies on data management.
If passed, the new law could dramatically limit the ability of social media apps to make money, increase government oversight of the back-end tech powering Internet giants, and set a global precedent for managing the thorny issue of A.I. ethics.
The 30-point law, which is still in the drafting stage, bans companies from using algorithms that “encourage addiction or excessive consumption” and mandates that companies use algorithms to “actively spread positive energy.”
The CAC is unlikely to define those vague parameters. Loose definitions give regulators broad cover to crack down on companies as they see fit, while an ambiguous law also encourages companies to self-regulate, since they never know where the line is.
The proposed rules on algorithms are just the latest in a slew of policies the central government has levied against China’s tech giants. On Wednesday, Beijing enacted a new Data Security Law that limits how companies can use data deemed “sensitive” to national security and mandates how the data must be stored.
In November, another law—the Personal Information Protection Law (PIPL)—will begin limiting how companies can handle consumer data. The proposed algorithm regulations also stipulate measures companies must implement to give users more control over their data.
According to the guidelines published Friday, users will be able to alter and delete “keywords” used by the algorithm to identify them and will be able to turn off the recommendation system altogether, if necessary.
“From a consumer user perspective, I think the regulations are generally a good thing,” Freeman says. However, he notes that giving users more control over their data might alter the quality or pricing of the product’s services—many of which are currently free.
Fundamentally, recommendation algorithms are designed to encourage addiction and excessive consumption. Social media apps—like TikTok, the world’s most downloaded app—use the smart software to push captivating content onto a user’s feed and then monetize user engagement by selling advertising, which drives consumption.
“ByteDance’s entire business is based on recommendation systems,” says Lian Jye Su, principal analyst at global tech market advisory firm ABI Research, referring to TikTok’s Beijing-based parent company.
Analysts often cite ByteDance as the global leader in recommendation algorithm tech, which it pioneered with its first product, news aggregator Toutiao, in 2012. ByteDance has leveraged its expertise in content recommendation to become the world’s most valuable startup, with a private valuation of $140 billion. However, Su says the law won’t “be the end” of companies like ByteDance, adding that they will be able to find alternative business models if necessary.
ByteDance declined to comment on how the proposed governance would alter its business model.
The draft regulation is open for public comment until Sept. 26. Businesses have an opportunity to lobby regulators on aspects of the bill they feel are too limiting, but regulators might not listen.
“Regulators have taken an aggressive stance toward the Chinese tech industry, so it will be interesting to see how open they are to tech input,” Freeman says.
Power to the people
While Beijing is handing greater control to consumers, regulators are also seizing more power for themselves.
The algorithm law would require companies operating algorithms that can influence public opinion—such as China’s Twitter-like Sina Weibo—to have those algorithms approved by the CAC. Companies without approval risk being shut down and fined.
The CAC would check that the relevant algorithms are spreading, in its own words, “mainstream values.” But it’s unclear how the government will be able to effectively assess an algorithm’s intended function.
“This is not like a car safety requirement where you can literally run a car into the wall and see whether it meets safety standards,” Su says.
Analyzing an algorithm will require a level of technical expertise the government probably doesn’t possess, since the best developers will be snagged by the tech industry. And once an algorithm passes review, it will be hard for the government to monitor how the algorithm functions in the real world.
“This is why A.I. governance—which numerous governments talk about—has always been very tricky to achieve,” Su says.
If Beijing is able to overcome these hurdles, its new laws on algorithm governance might provide a pivotal case study for other governments struggling to rein in their own domestic tech giants.