Bad actors will use Instagram images, WhatsApp messages, and tech-altered videos to spread disinformation to potential voters during the 2020 U.S. Presidential election.
The prediction comes from a report on political disinformation released Tuesday by New York University's Stern Center for Business and Human Rights. One of its conclusions is that social media services other than Facebook, YouTube, and Twitter, the primary targets of Russian meddling in the 2016 elections, will be key targets of people seeking to manipulate voters.
“Instagram hasn’t received as much attention in the disinformation context as Facebook, Twitter, and YouTube, but it played a much bigger role in Russia’s 2016 election manipulation than most people realize,” the report states. “And it could become a crucial Russian instrument next year.”
The reason for the slightly different focus is that many social media services have beefed up their defenses. Facebook, for example, now lets users search for information about political ads, including who bought an ad, how much the buyer spent, and how much traction it got. And more recently, the company has started requiring advertisers to prove their identities and include contact information.
“They are better defended than they were in 2016 and more aware of the problem,” Paul Barrett, deputy director of the NYU Stern Center and author of the report, said about social media companies. But “it’s almost impossible for these platforms, given the way they’re set up, to defend themselves completely from meddling and disinformation that they’re going to face.”
Instagram, which is owned by Facebook, is an attractive target because the Internet Research Agency, the Russian troll farm behind the 2016 manipulation effort, had success there. The IRA received more user engagement on Instagram than on any other social media service, according to a Senate Intelligence Committee report cited by the NYU Stern Center.
From 2015 through 2018, Instagram posts with IRA material received 187 million user engagements. That is more than double the engagement the IRA got on Facebook or Twitter.
Also, Instagram is an “ideal venue for memes,” typically a humorous video, image, or piece of text. And memes are a common way people distribute fake quotes and disinformation, the NYU Stern Center report says.
Meanwhile, WhatsApp’s private, encrypted messaging creates an environment for the viral spread of disinformation without the ability to police it.
WhatsApp has already proven to be a “powerful vehicle” of distributing disinformation during presidential elections in Brazil and India, according to the report. During those elections, WhatsApp users forwarded false information, much of which was generated by political campaigns, to other users and groups.
The good news is that WhatsApp is far less widely used in the U.S. than in Brazil or India, which have roughly two and five times as many WhatsApp users, respectively, according to the report.
Facebook said it is aware that adversaries are constantly changing their techniques, and that much of the company's work on tools and policies across its family of apps is focused on staying one step ahead.
“We also know that security is never finished and we can’t do this alone, so we are working with policymakers and outside experts to make sure we continue to improve,” said Tom Reynolds, Facebook spokesman.
Finally, deepfake videos, or videos altered by artificial intelligence to mimic a real person, could pose a major threat by making voters believe a candidate said something that he or she never did.
Experts are uncertain about when deepfakes will be believable enough to confuse the average viewer. Current deepfakes are still low quality and somewhat easy to spot. But most experts agree—it’s just a matter of time.
Social media companies are already trying to prepare for how they will police an onslaught of fake videos, a signal that the danger is real, Barrett said. Working against them is the fact that it doesn’t take a lot of effort to influence people with fake videos.
“You don’t necessarily need the most polished deep fake in the world to have an impact,” Barrett said.
Additionally, the report says Iran and China may join Russia in pushing disinformation. For-profit firms, both domestic and foreign, aiming to influence the election will also likely play a role.
The report also lists recommendations for social media companies that include limiting the reach of WhatsApp by only allowing users to forward content to one chat group at a time, defending against for-profit disinformation by preparing for an onslaught from these groups, and improving industry-wide collaboration on disinformation by sharing learnings about bad actors with other services.
“We’re moving toward the situation where people don’t know what to read and where they’re skeptical about everything they read,” Barrett said. “It could be a real problem.”
This story has been updated to include a statement from Facebook.