Do you have a college degree, analytical skills, and an eye for questionable content? If so, you might have what it takes to become a “News Feed integrity data specialist” for the largest social network in the world: Facebook.
The fancy job title is industry-speak for workers who sift through and flag articles, videos, and other posts that could violate the company’s code of conduct—a professional role that’s in growing demand. Indeed, in response to concerns over a proliferation of fake news, abuse, and Russian-backed political ads, the social media giant has already put 10,000 people to work on beefing up its safety and security (7,500 of them are so-called human moderators), and says it will grow that number to 20,000 by the end of 2018. Alphabet’s YouTube division, which has also faced criticism for allowing violent and offensive content to thrive on its platform, is ramping up its human workforce too, with plans to hire more than 10,000 people this year. And Twitter, under increasing pressure over the spread of bot-controlled accounts and other “bad actors” on its app, has announced it will use moderators in an effort to become a “safer” place for real users, though it won’t disclose how many people it has hired or plans to hire.
“We have invested significant resources in both human content monitoring and machine learning to fight abuse on Twitter,” a company spokeswoman told Fortune in an email.
The efforts to bring in actual people are necessary at the moment because, researchers say, algorithms still have a hard time telling the difference between a clip of someone eating a chicken sandwich and a video of someone performing inappropriate acts on one. (A real example, unfortunately.)
But humans may not be a sustainable solution. The 10,000 additional moderators and other workers Facebook says it hopes to deploy to combat abuse on its platform are a pricey supplement to its existing 25,105 total employees, even if some of those new hires are contractors. Adding this many people hardly fits into tech company business models. (And yes, these days Facebook has even bigger problems with its business model: See Cambridge Analytica.) What’s more, the sheer amount of content on these platforms makes it nearly impossible for even thousands of new employees to make a significant dent in any cleanup efforts.
But there is a technological alternative, and it’s already happening. Even as YouTube’s CEO, Susan Wojcicki, announced the expansion of her company’s fleet of human moderators last December, she also pointed to the A.I. already on the job. “Since we started using machine learning to flag violent and extremist content in June [2017], the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess,” Wojcicki wrote in a company blog post at the time. Artificial intelligence is not only faster than people; it is also immune to emotional wear and tear. In mid-March, Wojcicki made another announcement: YouTube had started limiting the amount of time its content moderators can sift through videos to four hours per day, in order to protect their mental health. The move, while necessary, makes it even more unlikely that employees will be able to significantly clean up content on sites like YouTube.
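To put those numbers in perspective, here’s a rough back-of-the-envelope calculation (a sketch in Python; the 180,000-person equivalence, the 40-hour week, and the four-hour cap come from the article, while the five-day work week is our assumption):

```python
# Back-of-the-envelope math on YouTube's moderation numbers.
# The first three figures come from the article; the five-day
# work week is an assumption made for illustration.
ml_equivalent_people = 180_000      # people the ML's output would require
full_time_hours_per_week = 40       # hours each of them would work
ml_person_hours = ml_equivalent_people * full_time_hours_per_week  # 7,200,000

capped_hours_per_day = 4            # YouTube's new daily limit per moderator
assumed_workdays_per_week = 5       # assumption
capped_hours_per_week = capped_hours_per_day * assumed_workdays_per_week  # 20

# Moderators needed to match the machines under the new cap:
moderators_needed = ml_person_hours / capped_hours_per_week
print(f"{moderators_needed:,.0f} moderators")  # 360,000 moderators
```

Under those assumptions, matching the machines’ weekly output would take roughly 360,000 moderators working under the four-hour cap, far beyond any of the hiring plans described above.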
If history is any indication, the efficiency of technological tools will likely prevail over the more nuanced abilities of humans. In the early days of search engines, for example, companies like Yahoo relied on human curation, employing thousands to neatly categorize and organize the fast-growing content on the web. But before long, Google-built algorithms were shown to be vastly superior, and the rest is history. (Though to be sure, there were other reasons that Yahoo didn’t last as a search engine.)
Just because Facebook is hiring thousands, and CEO Mark Zuckerberg has said he is “dead serious” about ridding his platform of problems like Russian meddling in our elections, that doesn’t mean he sees an army of human moderators as a workable—or long-term—solution. More likely, he and the rest of the industry see humans as a temporary patch, not to mention a public relations move intended to placate those who are skeptical of fighting bad technology with even more technology.
Cynical? Maybe, but consider this: The world’s largest social network has 2.13 billion monthly active users. Even with 20,000 safety and security specialists, that works out to roughly one person for every 100,000 accounts, to say nothing of all the videos and messages published by those users, real or bot.
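The arithmetic behind that ratio is easy to check; a quick sketch in Python, using only the figures above:

```python
# Accounts per safety specialist, using the article's figures.
monthly_active_users = 2_130_000_000   # Facebook's 2.13 billion MAUs
safety_specialists = 20_000            # planned headcount by end of 2018

accounts_per_specialist = monthly_active_users / safety_specialists
print(f"{accounts_per_specialist:,.0f} accounts per specialist")  # 106,500
```

The result, about 106,500 accounts per specialist, is what the one-per-100,000 shorthand refers to.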
That means that if there’s any hope of stemming the massive spread of misinformation, hate speech, and violent content (to name just a few offenders) readily available on all of these popular platforms, it won’t come from throngs of poor souls struggling through hours of gratuitous posts. Rather, it will come from a few people working alongside technology that has yet to be developed. Let’s hope it gets there soon.