Commentary: The War Against Bad Bots Is Coming. Are We Ready?

February 26, 2018, 8:55 PM UTC

Fake news. Fake social media accounts. Fake online poll takers. Fake ticket buyers. And behind them all: the prolific fakery of botnets.

When will we get real and stop them?

Malicious bots account for nearly 20% of all Internet traffic. These robotic computer scripts have been responsible for stealing content from commercial websites, shutting down websites, skewing advertising metrics, spamming forums, and snatching away Hamilton tickets for exorbitant resale.

But revelations about Russian bots meddling in the U.S. election and a scorching New York Times investigation into the selling of fake Twitter followers and retweets vividly illustrate that the bot epidemic is even more severe than most people realized.

And yet the bots march on, aided by a double whammy: murky laws governing their creation and sale, and social media companies that have too often turned a blind eye to the veracity of their reported user numbers.

Tightening our defenses against malicious bots won’t be easy, but recent events show that the effort is warranted. Bots should be considered nothing less than a public enemy.

Bots infiltrate social media

Not long ago, bots were mainly thought of as an IT or somewhat esoteric business problem—the main culprits behind web scraping, brute force attacks, competitive data mining, account hijacking, unauthorized vulnerability scans, spam, and click fraud.

But the use of bots to manipulate elections and political discussion via the major social media platforms is a new and unnerving trend.

In October, members of Congress hauled executives from Facebook, Twitter, and Google into a hearing to explain Russian interference via their platforms in the 2016 presidential campaign. The executives promised to do better. And yet in late January, top congressional Democrats called on Facebook and Twitter to analyze the role of Russian bots in the online campaign to release a memo containing classified information about the federal investigation into Russia’s meddling.

On Feb. 16, Special Counsel Robert Mueller filed an indictment accusing 13 Russians of running a bot farm and disinformation operation that spread pro-Donald Trump propaganda on social media.

Bots are more prevalent on Twitter than many realize. While Twitter testified before Congress that about 5% of its accounts are run by bots, some studies have shown that number to be as high as 15%. In November, Facebook told shareholders that around 60 million, or 2%, of its average monthly users may be fake accounts.

Social media companies—just like online publishers—have a vested interest in letting bots exist on their platforms because monthly active users are one of their main measurements of success. Accounts, human or not, are accounts.

Stopping the madness

Social media companies’ disingenuous Captain Renault act—he was the character in Casablanca who declared, “I’m shocked, shocked, to find that gambling is going on here”—must stop. With its power to influence opinions, social media does remarkable harm when it is used to rig elections and distort public debate. So social media companies must step up and more aggressively self-police.

We know they can do it. Look at how more than a million followers disappeared from the accounts of dozens of prominent Twitter users right after the New York Times investigation was published. I doubt this was a coincidence.

Twitter should consider extending its “verified” program—that blue badge that lets people know an account of public interest is authentic—to all human users. This would be a huge technological undertaking—after all, bots are so hard to prevent because they act as a legitimate user would—but the same artificial intelligence technologies that allow bots to emulate humans could be used to verify humans.

The government’s role

Meanwhile, government needs to join the fight against bad bots. This won’t be easy, as bot operators are anonymous, and it’s difficult to legislate against those you can’t identify.

The bot problem didn’t prompt its first piece of federal legislation until September 2016, when Congress passed the anti-ticket scalping Better Online Ticket Sales (BOTS) Act. Interestingly, the ticket problem persists despite the law, in part because the Federal Trade Commission has done little to enforce it.

A good next move for Congress would be a long-overdue update of the Computer Fraud and Abuse Act of 1986, which makes it unlawful to break into a computer to access or alter information and, astoundingly, still serves as a legal guidepost today. U.S. law needs a clearer definition of what’s allowed and what’s not.

States can play a role too, as evidenced by New York Attorney General Eric Schneiderman’s laudable decision to investigate Devumi, the company selling fake social media followers and the subject of the New York Times investigation.

Enough is enough

Finally, we as consumers should say we’re tired of these shenanigans. Now, to be fair, there are two victims: the social media companies and the users. Twitter’s founders didn’t create their platform expecting it to come under attack from the Russians; they wanted people to communicate. Users didn’t expect their profiles to be stolen and their accounts to be abused. Nevertheless, we can demand that social media platforms be more transparent—or else we won’t use them.

It’s high time to recognize that bad bots are a serious threat and start addressing the problem head-on. The fakery can’t be allowed to continue, or we all suffer.

Rami Essaid is co-founder and chairman of Distil Networks, a bot detection and mitigation company.
