
The perils of letting social media titans correct misinformation

October 22, 2020, 6:33 PM UTC
In attempting to fight misinformation about Joe Biden and Ukraine, social media giants Facebook and Twitter became the story, write Sarah Kreps and Douglas Kriner.
Jakub Porzycki—NurPhoto/Getty Images

With the election imminent, social media platforms once again find themselves in the awkward position of a referee deciding whether to call a penalty in stoppage time.

Since this summer especially, social media platforms have taken the unprecedented step of flagging and even deleting false or misleading claims by politicians, including, most recently, a post by President Trump claiming that COVID-19 was “less lethal” than the flu.

Platforms are also taking aggressive action to remove inauthentic accounts and to curb the kind of misinformation that spread across social media in 2016. The recent question of whether and how to allow a New York Post story on their sites brings the complicated politics of content moderation into sharp relief.

Last week, the Post published a story allegedly revealing secret emails about a meeting between Joe Biden, then Vice President, and an adviser to a Ukrainian energy company. The social media giants moved aggressively against the story, and in so doing became the story. Facebook limited distribution of the Post’s main article, while Twitter blocked users from posting pictures of the emails; users who tried to share links instead saw a message saying that the photos had been obtained through hacking and contained private information.

Within a day, CEO Jack Dorsey confessed that Twitter’s communication about the decision had been “not great”—blocking links without offering context was “unacceptable.”

The Post brouhaha exposes the potential perils of draconian action. One argument against such measures is normative: deciding what can be shared and what is prohibited is tantamount to censorship. In this case, Twitter’s response went beyond its own published policies, making it harder for the company to claim even a patina of legitimacy. When Twitter did reverse course, Dorsey explained, not entirely convincingly and thereby playing into the hands of critics, that the initial decision stemmed from the Post story’s sharing of personal information rather than from its unsubstantiated claims.

Another argument against such measures is tactical: trying to correct misinformation can generate a backfire effect, further entrenching misperceptions among those predisposed to believe the false claim. Some research suggests that when people are confronted with information that conflicts with their preexisting views, they counterargue in their minds with such vigor that they end up holding more extreme views than they did before the correction.

During the COVID-19 pandemic, platforms have experimented with different correction formats with an eye toward correcting misinformation without triggering a backfire effect. But perhaps these concerns are overblown: research on the backfire effect is mixed, and some studies suggest it happens only rarely.

But how should platforms act when political officials—even the President—are major spreaders of false claims? Will corrections work? Will they have no effect? Or worse, will they backfire?

To find out, we conducted a survey experiment within days of Twitter flagging a presidential tweet as false for the first time—Trump’s May 26 tweet alleging, without evidence, widespread mail balloting fraud. Some of our subjects saw only Trump’s original tweet. Others saw both the tweet and a correction: a simple “nudge” urging voters to “get the facts about mail-in ballots,” a more robust correction, or an even stronger one that provided more information rebutting Trump’s claim. Finally, subjects in the control group saw neither the President’s tweet nor a correction.

Looking at all of our survey respondents together, neither Trump’s tweet nor any of the corrections had much of an effect on public beliefs about whether mail fraud occurs or how much fraud there is in U.S. elections, or on attitudes toward voting by mail.

However, the corrections had dramatically different effects on Democrats and Republicans. Providing the facts decreased concerns about fraud among Democrats, but increased them among Republicans—a classic backfire effect. 

As the election draws closer, social media companies face mounting pressure to fact-check and push back against false or misleading claims posted by politicians on their platforms. But these efforts to combat misinformation may not only fail; they may unintentionally further fan the flames of partisan polarization.

Sarah Kreps and Douglas Kriner are political scientists at Cornell University.