Facebook, Google, and Twitter Could’ve Prevented the Russian Ads. Why Didn’t They?
Representatives from Facebook, Twitter, and Google are set to face hostile questions from Congress about how Russia used their platforms to influence U.S. politics. The problem of unfriendly foreign powers spreading propaganda is not new. It has happened for centuries.
What is new, however, is the reach of the Internet and modern social networks, and the speed with which misinformation campaigns can be launched and propagated. This combination has made it easy to spread misinformation and to target specific communities with a precision that was previously unimaginable.
Tech companies have denied this is happening while happily raking in huge profits from the misuse of their platforms. Only in the past week did Twitter agree to ban ads from state-sponsored Russian news sites, years after government intelligence agencies and media critics began questioning whether those sites were publishing outright lies.
The tech giants could have solved these problems years ago with simple technology fixes and filters. This is not rocket science, and it does not erode free speech. For example, blocking ads published by foreign news agencies and targeted at small parts of key election states is a no-brainer; why would RT need to buy ads in a few zip codes in western Pennsylvania for any reason other than to circulate propaganda? And if a new social media account has no associated website and lists no physical office in the U.S. or even in a foreign country, why let it purchase advertising at all?
How did we get here?
The stars have actually been aligned for a few years. Social media has become ubiquitous. The Pew Research Center finds that 67% of all Americans get their news from social media at least some of the time. Social and search platforms have won our trust, as well. People actually trust Google more than online media sites, according to a 2016 survey by Edelman. And fake news outperforms real news, according to an analysis by BuzzFeed.
There are actually two facets of this problem. One is the political ads, and these are relatively easy to tackle. The tech giants should meet the same standards that media companies are held to, and embrace disclosure. This is the price of being a good corporate citizen. If television networks and radio stations can handle the requirements to disclose purchases, so can the techies. Yes, online ads are automated and poring through multiple layers is not easy. Third-party ad serving systems also make establishing ownership harder. But tying a responsible party to each ad is just another data field that Google, Twitter, and Facebook can require. They can verify this just as they verify the bank or payment accounts of companies they do business with.
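The "just another data field" idea above can be made concrete. Here is a minimal sketch of what requiring and verifying a responsible party at ad-purchase time might look like; the field names and the validation rule are illustrative assumptions, not any platform's actual ad API.

```python
# Sketch of a required "responsible party" field on an ad purchase.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class AdPurchase:
    advertiser_name: str
    responsible_party: str        # the disclosed entity paying for the ad
    payment_account_verified: bool  # verified the way payment accounts already are

def accept_ad(ad: AdPurchase) -> bool:
    """Reject any ad purchase that lacks a disclosed, verified buyer."""
    return bool(ad.responsible_party.strip()) and ad.payment_account_verified
```

The point is not the three lines of logic but that disclosure becomes a hard requirement of the pipeline, enforced the same way payment details already are, rather than an optional policy.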
And then there is the veracity and intention of the content. This is harder to judge and police, but not impossible. Fake or questionable content has a fingerprint. Google, Facebook, and other tech companies already have algorithms that rank content. It is not terribly difficult to add criteria that assess credibility. This could be a numeric score based on a dozen or so factors: the age of the site, DNS records showing its internet provenance, where links to the site come from, and whether the site has been linked to by known fake Twitter bots. Or perhaps Facebook and Twitter could ask why a no-name fake news platform is geo-targeting ads only to Wisconsin. With this score, any link that falls below a certain threshold can be reviewed more closely by human editors before it is highlighted.
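To make the scoring idea tangible, here is a minimal sketch of such a credibility score. The factors come from the paragraph above (domain age, link provenance, bot amplification, narrow geo-targeting), but the specific weights, caps, and threshold are invented for illustration and are not any platform's actual criteria.

```python
# Illustrative credibility score; weights and threshold are assumptions.
from dataclasses import dataclass

@dataclass
class SiteSignals:
    domain_age_days: int           # from DNS/WHOIS records
    reputable_inbound_links: int   # links from established outlets
    bot_inbound_links: int         # links from known fake accounts
    geo_targeted_only: bool        # ads aimed only at a few zip codes

def credibility_score(s: SiteSignals) -> float:
    """Return a 0-100 score; higher means more credible."""
    score = 50.0
    # Older domains earn trust, capped at +20.
    score += min(s.domain_age_days / 365, 10) * 2
    # Links from reputable outlets add credibility, capped at +20.
    score += min(s.reputable_inbound_links, 20)
    # Amplification by known bot accounts is a strong negative signal.
    score -= min(s.bot_inbound_links * 5, 40)
    # Narrow geo-targeting by an unknown site is suspicious.
    if s.geo_targeted_only:
        score -= 15
    return max(0.0, min(100.0, score))

REVIEW_THRESHOLD = 40.0  # hypothetical cutoff

def needs_human_review(s: SiteSignals) -> bool:
    return credibility_score(s) < REVIEW_THRESHOLD
```

A ten-year-old site with reputable inbound links sails through; a month-old site amplified by bot accounts and geo-targeting a handful of zip codes falls below the threshold and gets routed to human editors. That is the whole design: cheap automated triage, with people judging only the doubtful cases.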
Some might perceive this as censorship. But it does not mean that low-ranked sites that are legitimate would be blocked permanently. They might simply be asked to provide more details in order to get whitelisted. And, in fact, this is not much different from what the media industry does today. All outlets exercise the right to decide what they allow on their platforms. We are not talking about stopping fringe groups from putting up websites. Rather, we are focusing on identifying them and blocking them from hijacking social media.
There is another issue that the tech giants don't want to face up to: They already know that a significant proportion of their accounts are trolls and bots, and they can easily delete these accounts. One study estimated that up to 15% of Twitter accounts are bots, not people. If university researchers can figure this out, so can the data scientists at Twitter. But reduced user numbers and engagement rates would hurt Twitter's stock price, so it does not want to acknowledge this reality.
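Bot detection of the kind those researchers do often starts with simple behavioral heuristics. The toy sketch below illustrates the idea; the features and cutoffs are assumptions for illustration, not the methodology of any published study or of Twitter itself.

```python
# Toy bot heuristic; feature thresholds are illustrative assumptions.
def looks_like_bot(tweets_per_day: float,
                   followers: int,
                   following: int,
                   has_default_profile: bool) -> bool:
    """Flag an account when at least two crude bot signals fire."""
    signals = 0
    if tweets_per_day > 100:  # inhuman posting volume
        signals += 1
    if following > 0 and followers / following < 0.05:
        signals += 1          # follows far more accounts than follow it back
    if has_default_profile:   # never customized the account
        signals += 1
    return signals >= 2
```

Real classifiers use far richer features and machine learning, but even this crude version makes the article's point: the signals are sitting in the platforms' own data, waiting to be acted on.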
This is a matter of profits, not rocket science. When it comes to making money, tech companies can always figure out how to solve the most complex of problems. No more excuses. The future of our democracy is at stake.
Vivek Wadhwa is a distinguished fellow at Carnegie Mellon University’s College of Engineering and Alex Salkever is an author, public speaker, and former vice president of marketing at Mozilla. Together they authored The Driver in the Driverless Car: How Our Technology Choices Will Create the Future.