Google and Facebook’s Biggest Problem Isn’t Controlling Their Platforms. It’s Managing Expectations

December 20, 2018, 10:15 PM UTC
Google CEO Sundar Pichai testifies before the House Judiciary Committee on Dec. 11, 2018.
Alex Wong—Getty Images

Google CEO Sundar Pichai’s testimony before the House Judiciary Committee last week is just the latest example of a tech company having to respond to accusations of bias. While Pichai spent much of his time defending Google against allegations of bias in search results on Google and YouTube, he isn’t alone. Facebook, for instance, has been accused both of “catering to conservatives” and of acting as a network of “incubators for far-left liberal ideologies.”

While accusing these companies of bias is easy, it’s also wrong.

As Rep. Zoe Lofgren (D-CA) correctly pointed out during Pichai’s testimony, “It’s not some little man sitting behind the curtain figuring out what [companies] are going to show the users.” Instead, these companies—and the people who work there—have been tasked with moderating content created by billions of users across the globe while also having to satisfy both the broader public and competing lawmakers who aren’t afraid to throw their weight around. Moreover, these companies are expected to carry out this impossible moderation task in a consistent and ideologically neutral way. And, for the most part, they are doing an admirable job.

Given the complexity and scale of the task, we shouldn’t be surprised that results vary. As Pichai noted, Google served over 3 trillion searches last year, and 15% of the searches Google sees each day have never been entered on the platform before. Do the math, and that means somewhere around 450 billion of the searches Google served last year were brand-new queries.
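For readers who want to verify that back-of-the-envelope figure, the arithmetic is simply 15% of 3 trillion. A minimal sketch, using only the figures cited above (the variable names are ours, not Google’s):

# Back-of-the-envelope check of the "450 billion new searches" figure.
total_searches_per_year = 3_000_000_000_000  # "over 3 trillion" searches served last year
share_never_seen_before = 0.15               # 15% of searches have never been entered before
new_searches_per_year = total_searches_per_year * share_never_seen_before
print(f"{new_searches_per_year:,.0f}")       # prints 450,000,000,000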

Inevitably, many people will be left unsatisfied with how their preferred commentators and ideological views are returned in those searches, or moderated on other platforms. Mistakes will occur, trade-offs will be made, and there will always be claims that content moderation is driven by bias and animus.

Tech companies are attempting to achieve many different—sometimes conflicting—goals at once. They are working to limit nudity and violence, control fake news, prevent hate speech, and keep the internet safe for all. Such a laundry list makes success hard to define—and even harder to achieve. This is especially the case when these goals are pitted against the sacrosanct American principle of free speech, and a desire (if not a business necessity) to respect differing viewpoints.

When these values come into conflict, who decides what to moderate, and what to allow?

As it has expanded to welcome more than 2 billion users, Facebook has upped its content moderation game as well. The company now has a team of lawyers, policy professionals, and public relations experts in 11 offices across the globe tasked with crafting “community standards” that determine how to moderate content.

In recent months, Facebook has been more open about how these rules are developed and employed. This spring, Monika Bickert, the platform’s head of global policy management, wrote about Facebook’s three principles of safety, voice, and equity, and the “aim to apply these standards consistently and fairly to all communities and cultures.”

Can any standard be consistently applied to billions of posts made every single day in more than 100 different languages? Artificial intelligence and machine learning are very good at filtering out nudity, spam, fake accounts, and graphic violence. But for content that depends on context—which has always been the thornier issue—platforms must rely on human moderators to sort through each and every post that might violate their rules.

Putting aside the fact that they have not been able to satisfy those on either side of the political spectrum, Facebook and other platforms have taken their obligation to protect users seriously. After all, each faces a strong financial incentive to keep its users happy, and to avoid the appearance of favoring one set of political beliefs over another. Thus, creating neutral rules that can be consistently applied, regardless of political affiliation, is in a platform’s self-interest.

But when you look at how content moderation actually gets done, it’s clear that discretion by human beings plays a very large role. Facebook’s policies on what constitutes hate speech are written by human beings, and ultimately are enforced by human beings who—no matter how well-meaning they are—have different backgrounds, biases, and understandings of the subject matter. We shouldn’t be surprised when the results are inconsistent, messy, and end up leaving both conservatives and liberals unhappy. This doesn’t mean tech companies are politically biased—it means their job is incredibly difficult.

Christopher Koopman is the senior director of strategy and research and Megan Hansen is the research director for the Center for Growth and Opportunity at Utah State University.
