
China’s online propaganda isn’t very good

September 8, 2021, 5:11 PM UTC

There’s a panic today over disinformation — whatever that is

The last two elections were lousy with it. Meanwhile, a California man last month killed his two children because he believed in a QAnon-linked conspiracy that they were actually lizard people. And social media users, from every conceivable coordinate on the political spectrum, seek to cast inconvenient news as a psyop.

Twitter, as the saying goes, isn’t real life. And new research shows that fake information may be more of a boogeyman than a true threat. 

Out today is a new report from cybersecurity firm Mandiant about China’s online propaganda push. China, it should go without saying, is a very sophisticated state actor, one the White House recently accused of “irresponsible and destabilizing behavior in cyberspace” for allegedly hacking Microsoft and engaging in other nefarious behavior. 

The Mandiant report focuses on how China uses one particular form of soft power: propaganda on social media. Researchers Ryan Serabian and Lee Foster show that its network of people posting disinformation to the outside world is far larger than previously thought, spanning 30 social media sites — including Facebook and YouTube — and 40 other sites, reaching speakers of Russian, German, Spanish, Korean, and Japanese. 

“Collectively, these observations suggest the actors behind this campaign have significantly expanded their online footprint and appear to be attempting to establish a presence on as many platforms as possible to reach a variety of global audiences,” according to the report. They are targeting Chinese dissidents, pushing a narrative that the COVID-19 pandemic originated with the U.S., and even trying to smear former Trump consigliere Steve Bannon for his criticism of China. 

So far, quite scary. 

But take a deeper look. The posts are amateurish, with obvious grammatical mistakes across languages, Mandiant reports. There were attempts to organize physical protests — echoing a notorious Russian tactic from 2016 — but nobody attended. 

It seems that even though there are thousands of accounts amplifying this information, they’re mostly talking with each other. 

The research echoes an August report from the Centre for Information Resilience on the Chinese poster network, which concludes that “much of this network’s content, as far as we have identified, has only been shared by other accounts in the network with the purpose of amplifying that content.”

So then, what’s all this fear about? Coverage about disinformation tends to focus on the susceptibility of some targeted group, like people slurping down ivermectin to treat COVID. The concern is that they’ll be swayed to vote not in their own interests, but for the benefit of some other country.

But at the heart of it, isn't it really a fear that our thoughts aren't our own? That we could be so easily susceptible to propaganda and division, and that our most dearly held beliefs originated in some politburo somewhere?

Of course, this has been the case for a long time. The U.S. helped perfect the art of it here and abroad, after all. The domestic media hasn’t exactly done a good job of discerning truth from propaganda, either.

China, to be sure, has other ways to flex. Access to the market there is an important part of the growth plans for countless executives. The list is long of international companies, and even celebrities, who’ve groveled in apology for sins as venial as recognizing Taiwan. China also has plenty of weapons, and the largest army in the world, in case this whole online disinformation campaign doesn’t work out.

Kevin T. Dugan
@kevintdugan

NEWSWORTHY

Off base. Coinbase received a Wells notice — an intention to bring charges — from the Securities and Exchange Commission regarding its plans to offer interest to customers for lending crypto. The notice means that the company won't launch the product until at least October. In a Twitter thread, CEO Brian Armstrong detailed the back-and-forth with the regulator. In one post that's gone viral, he asked, "how can lending be a security?"

Citrix or treat. Elliott Management, the hedge fund founded by activist Paul Singer, has taken a 10% stake in software company Citrix in a bid to boost the company's stock price. Elliott had invested in the company in 2015, though its U.S. activist investing head Jesse Cohn didn't stand for re-election to the company's board in April. The company's shares are up more than 7% today. 

Moving picture. Facebook is fighting the U.K. Competition and Markets Authority over the social media giant's acquisition of Giphy last year, questioning the regulator's authority to force it to sell the company. The CMA is pushing Facebook to sell the GIF database over competition concerns in the display ad market. 

Byte me. ByteDance, the company that owns TikTok, is in talks with banks to raise more than $3 billion to refinance its debt and fund expansion. The money talks come after the company appears to have tabled plans to go public and announced that it would sell its stakes in several fintech companies amid a broader regulatory crackdown in China. 

No comment. Australia's High Court ruled that media companies are responsible for the posts that people make on their pages, reasoning that comments sections encourage people to interact with the organizations. The decision could serve as a preview of the fight in the U.S. to repeal or amend Section 230 of the Communications Decency Act, which protects online publishers from legal liability for their users' comments. Publishers like Rupert Murdoch's News Corp. are seeking to change the law. 

FOOD FOR THOUGHT

A.I. limits. For years, journalists, activists, and consumer groups have been raising alarms about the ethical problems of artificial intelligence, and how the burgeoning technology can reinforce racist and sexist divisions in society. Silicon Valley, it appears, is starting to listen. Reuters reports that ethics panels at companies like Google have been rejecting requests to use their technology from lenders and other companies where the pattern-matching programs could run afoul of legal and ethical lines. This is all great, but doesn't it seem too little, too late?

From the article:

In September last year, Google's cloud unit looked into using artificial intelligence to help a financial firm decide whom to lend money to.

It turned down the client's idea after weeks of internal discussions, deeming the project too ethically dicey because the AI technology could perpetuate biases like those around race and gender.

Since early last year, Google has also blocked new AI features analyzing emotions, fearing cultural insensitivity, while Microsoft restricted software mimicking voices and IBM rejected a client request for an advanced facial-recognition system.

All these technologies were curbed by panels of executives or other leaders, according to interviews with AI ethics chiefs at the three U.S. technology giants.

IN CASE YOU MISSED IT

40 Under 40 by Fortune

How digital surveillance thrived in the 20 years since 9/11 by Jonathan Vanian

Germany’s ‘sovereign cloud’ is coming—and it’s provided by Google by David Meyer

As Tesla stock surges, it’s now worth as much as the next six biggest carmakers by Shawn Tully

Boeing board members will have to face lawsuit over 737 Max crashes by Jef Feeley, Julie Johnsson, and Bloomberg 

Credit scores hit a 13-year high, but big disparities remain. See how your state fares by Megan Leonhardt

Why does the ‘return to work’ make us so uneasy? by Tim Jackson

Some of these stories require a subscription to access. Thank you for supporting our journalism.

BEFORE YOU GO

Security check. Phishing emails and calls tend to be about as gimmicky as they can get — but just like any cheap trick, they've gotta work sometimes. That's the logic in this Wall Street Journal piece breaking down why hackers target people, and how they go after your "lizard brain" vulnerabilities to get access to your information. The need to avoid loss, to trust authority, and to act quickly are all major reasons why people fall for relatively unsophisticated scams. If you think you're too sophisticated to fall for it — well, that's another way they can get you. 
