Why a Russian Anti-Vaccine Trolling Operation Failed to Resonate on Twitter

August 24, 2018, 8:41 PM UTC

Russian trolls used Twitter to spread polarizing information about vaccines in an effort to sow discord among Americans, according to a new study published this week by researchers at George Washington University.

The study, which examined thousands of Twitter posts between July 2014 and September 2017, focused on an unusual hashtag campaign called #VaccinateUS that the authors were able to link to previously known Russian trolls. (Fortune writer Sy Mukherjee has examined the study and the history of using public health issues to fuel conflict in a separate, thought-provoking essay.)

These trolls, who have ties to Russian political organizations, allegedly spread propaganda and fake news on Twitter, Facebook, and Google’s YouTube in the run-up to the 2016 U.S. presidential election in an effort to divide the country on hot-button issues like race and politics.

While the latest study highlights how Russian outfits have increasingly used social media to toy with people’s emotions and influence their behavior, it’s also notable that most Twitter users appear to have ignored the campaign’s anti-vaccine messages.

“What we saw in the VaccinateUS hashtag is the anatomy of a failed information campaign,” said the paper’s author David Broniatowski, an assistant professor at George Washington University’s School of Engineering and Applied Science.

Tweets that referenced the #VaccinateUS hashtag contained messages both in support of vaccinations and those against them. This was unusual, Broniatowski said, because most Twitter campaigns about vaccines are firmly for one side or the other.

The theory is that Russian trolls played both sides in order to lure casual Twitter users into sharing their own opinions, creating a snowball effect of replies.

But outside of the Russian trolls themselves, virtually no real Twitter (TWTR) users responded to the messages, Broniatowski said. Russian trolls generally try to exploit controversial topics like religion, race, and class division, but “sometimes they get it hilariously wrong,” he said.

Broniatowski attributed the campaign’s failure to the content of the tweets, which included: “VaccinateUS mandatory #vaccines infringe on constitutionally protected religious freedoms;” “Did you know there was a secret government database of #vaccine-damaged children? #VaccinateUS;” and “Dont get #vaccines. Iluminati are behind it. #VaccinateUS.”

The messages were so far-fetched that even people who believe in conspiracy theories chose to ignore them.

“We like to think this is a brilliant puppet master that got us figured out,” Broniatowski said of the Russian outfits trying to sow discord. “Sometimes they are just trying stuff.”

But the fact that the Russian trolls tweeted so often about vaccinations may create a skewed perception that most Americans are divided on the issue, he explained. On the contrary, Broniatowski cited a recent Pew Research Center study that showed the majority of the public is in favor of vaccines.

The challenge for social media giants like Twitter, however, is that removing accounts that spread misinformation is difficult, he said. Even if machine learning technology could automatically remove tweets like those in the failed vaccination campaign, not every campaign is equally effective at influencing the public.

The campaigns that are more effective are far more sophisticated, Broniatowski said, and he doesn’t believe machine learning technology, at least currently, can accurately remove them without human help.

More effective misinformation campaigns don’t rest on far-fetched claims, like a tweet that says “I don’t believe in vaccines, I believe in God,” which “doesn’t really make sense,” he said. Instead, they tap into something deeper that could be plausible, were it not for the underlying facts.

People who believe that vaccines cause autism generally believe that vaccines are harmful and are related to other diseases and ailments. Simply programming an algorithm to delete tweets that contain the words “vaccines” and “autism” would be ineffective because people would find a way to continue tweeting about vaccinations causing other ailments.
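To see why that naive approach breaks down, here is a minimal sketch in Python. The keyword list and example tweets are hypothetical illustrations, not data from the study or from Twitter’s actual moderation systems; the point is that a simple rewording of the same claim slips straight past the filter.

```python
import re

# Hypothetical blocklist for illustration only; not from the study
# or from Twitter's real moderation tooling.
BLOCKED_KEYWORDS = {"vaccines", "autism"}

def should_remove(tweet: str) -> bool:
    """Flag a tweet only if it mentions every blocked keyword."""
    words = set(re.findall(r"[a-z']+", tweet.lower()))
    return BLOCKED_KEYWORDS.issubset(words)

examples = [
    "Vaccines cause autism, wake up!",                  # caught: both keywords present
    "Vaccination is linked to seizures and allergies",  # evades: same idea, different words
]

for tweet in examples:
    print(should_remove(tweet), "|", tweet)
```

Defeating such a filter requires nothing more than rewording, which is why keyword matching alone quickly devolves into the cat-and-mouse dynamic Broniatowski describes next.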

“It becomes an arms race,” he said.

Broniatowski commended Twitter for its efforts to remove or demote misinformation, though many would disagree about how good a job it is doing. But he acknowledged the challenge the company and other social media networks face in policing their services.

“It’s not an impossible problem,” he said. Not every Twitter message with false information needs to be removed, since not all of them are effective in gaining people’s attention. For the more effective ones, however, there needs to be a way to “figure out counter messages” or to demote them.

For this complex issue, computer scientists will need to work with social scientists and psychologists who understand why certain misleading messages resonate with some people more than others, he said.

Twitter has said it is increasingly working with academics to study the “health” of its service and is attempting to come up with ways to stop misleading or abusive information from spreading. The challenge, however, is that these kinds of studies tend to take a long time to complete, and must be cross-checked with other academics for verification.

While Twitter and academics try to come up with a solution, Russian trolls or others will undoubtedly continue to spread misinformation, sometimes successfully and sometimes not.
