ChatGPT is a salvo in a growing generative A.I. ‘arms race’ between the U.S. and China

By Nicholas Gordon, Asia Editor

Nicholas Gordon is an Asia editor based in Hong Kong, where he helps to drive Fortune’s coverage of Asian business and economics news.


Hello from Hong Kong! Nicholas Gordon here, filling in for Jeremy this week.

I admit to feeling some mild jealousy when I see people talking about everyone’s favorite new A.I. toy, ChatGPT. Unfortunately, I’ve not been able to test the new program for myself. OpenAI has not made the service available in either mainland China or Hong Kong—though Chinese internet users have already found ways around that barrier.

Still, ChatGPT has made just as big a splash in conversations on this side of the Pacific, with speakers and conference attendees wondering how this new technology might revamp China’s tech sector.

It’s clear that Chinese researchers and companies are taking A.I. seriously. A study from Nikkei Asia, conducted in partnership with Dutch publisher Elsevier, reports that China produced twice as many A.I.-focused papers as the U.S. in 2021. Chinese papers also accounted for more of the top 10% most-cited papers, with a tally 70% greater than that of the U.S. And three Chinese tech giants—Tencent, Alibaba, and Huawei—are among the top ten companies driving A.I. research.

The study only covers 2021, however. And ChatGPT’s fame—at least here in Hong Kong—shows that U.S. developers can still drive the tech conversation, despite all the talk of decoupling.

Chinese companies are developing their own generative A.I. programs, of course. And users report that some Chinese products handle some things better than their U.S. competitors. ERNIE-ViLG, an A.I. image generator developed by Baidu, is reportedly better at producing images that use Chinese cultural imagery or anime stylings than U.S. counterparts like OpenAI’s DALL-E.

Yet these same Chinese A.I. programs fall into some of the same pitfalls, specifically around bias, as U.S.-developed programs. Chinese social media users had a field day with one Tencent-designed filter—which converted photos into an anime-styled image—after discovering that it was terrible, if not outright racist, when handling plus-size individuals and people of color.

And there are issues unique to Chinese programs: The MIT Technology Review notes that Baidu’s ERNIE-ViLG won’t let users generate images of Tiananmen Square.

China is taking the lead on some A.I.-focused regulations. Beijing last week imposed some of the world’s first restrictions on “deepfakes,” or the use of A.I. and machine learning technology to mimic people’s images and voices.

“China is learning with the world as to the potential impacts of these things, but it’s moving forward with mandatory rules and enforcement more quickly,” Graham Webster, who leads Stanford University’s DigiChina project, told the Wall Street Journal last week. “People around the world should observe what happens.”

Beijing has embraced A.I. development, with policymakers making it one of the country’s 2023 economic priorities at the Central Economic Work Conference last December. And artificial intelligence is a key part of the country’s military strategy, with applications in unmanned vehicles, information processing, and military decision-making.

China’s focus on A.I.—even if it’s still more theoretical than fully realized—is partly why the Biden administration imposed its sweeping rules on chip exports last October. The U.S. leaned on companies like Nvidia to stop selling their most advanced graphics processors (which are useful for machine learning) to China. And in December, the U.S. barred companies from selling any chips made using U.S. chipmaking equipment to over 20 Chinese A.I. companies.

Beijing appears to have been put on the back foot by Biden’s chip controls. It has yet to retaliate, instead trying to help Chinese chip companies comply with U.S. regulations where it can. Beijing is also reportedly pausing the massive investments and subsidies meant to build a giant chip industry, after failing to produce a local champion that can compete with U.S.-developed products.

Regardless, the fight between Washington and Beijing is shaping up to be, in the words of Wedbush analyst Dan Ives, an “A.I. arms race.” That competition is also why Ives called Microsoft’s potential $10 billion investment in OpenAI a “smart poker play.”

In the meantime, I’ll continue waiting to see if I ever get to use ChatGPT—or if I’ll have to turn to a Chinese-developed competitor. 

Nicholas Gordon
@nickrigordon
nicholas.gordon@fortune.com

A.I. IN THE NEWS

Microsoft is opening its Azure OpenAI service to all businesses, according to a post from the company on Monday. That would give Microsoft’s cloud customers access to tools like the image generator DALL-E and the code-writing Codex. (ChatGPT isn’t available yet, but Microsoft says it’s coming “soon.”) As Fortune’s Jeremy Kahn and Jessica Mathews have reported, Microsoft has already invested $3 billion in OpenAI—and is in discussions about investing another $10 billion.

It turns out training your A.I. on Twitch chat is a bad idea. In December, “Neuro-sama” debuted as a so-called Virtual YouTuber, or VTuber. (VTubers are live streamers who perform behind a virtual avatar rather than show themselves on camera.) Neuro-sama’s programmer claimed that everything the streamer did—playing Minecraft, topping rhythm-game leaderboards, and bantering with viewers—was A.I.-generated, and the streamer picked up tens of thousands of Twitch followers. Soon, however, clips emerged of the streamer making controversial statements, much like other chatbots trained on internet speech. Twitch suspended Neuro-sama last week, allegedly for “hateful content.” Kotaku

CNET confirmed that it’s using artificial intelligence to write some of its articles after eagle-eyed readers noticed that the website was attributing some pieces to “automation technology.” Editor-in-chief Connie Guglielmo confirmed that the outlet started using A.I. assistance for basic explainers “to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective.” Guglielmo said all A.I.-generated content is reviewed and fact-checked by a human editor. (Other outlets, like the Associated Press, also use automation to write stories on business and financial topics.) CNET

“I would advocate not moving fast and breaking things,” warns Demis Hassabis, the CEO of DeepMind, the other big A.I. development house, in an interview with Time. DeepMind made a name training A.I. models to win games like Go, but the rise of OpenAI may be forcing the Google-owned developer to act faster. Yet Hassabis warns that, in the wrong hands, A.I. guided by unsupervised learning could be dangerous. Developers “don’t realize they’re holding dangerous material,” he says. Time

EYE ON A.I. RESEARCH

ChatGPT’s ability to create what Fortune CEO Alan Murray calls “informed bullshit” is fooling even scientific reviewers. In a preprint study from late December, researchers found that human reviewers failed to recognize A.I.-generated scientific abstracts a third of the time. Worse, reviewers believed that 14% of human-written abstracts were generated by a computer.

That means ChatGPT can write “believable scientific abstracts,” Catherine Gao, a professor at Northwestern University, wrote in the study.

“If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” Sandra Wachter, a professor of tech and regulation at Oxford University, told Nature.

Using computers to fool scientific reviewers is a decades-old practice. In 2005, authors submitted an entirely nonsensical paper, generated using the program SCIgen, to a scientific conference. The conference, embarrassingly, accepted it, soon sparking a wave of fraudulent submissions generated using SCIgen.

FORTUNE ON A.I.

There’s a brewing ‘AI arms race’ and Microsoft’s ChatGPT play is ‘a potential game changer,’ Wedbush’s Dan Ives says — by Will Daniel

Bill Gates dismisses the inflation and economic doom and gloom to insist now is ‘dramatically’ the best time to be alive  — by Tristan Bove

Should A.I.-generated deepfakes be labeled? It’s the law in China now—and an expert says that we can all learn from what happens next — by Steve Mollman

A.I. bot ChatGPT tried to write a Nick Cave song. The singer says it ‘sucks’ and is a sign the ‘apocalypse is well on its way’ — by Alice Hearing
