Is Holocaust Denial Free Speech? Facebook Needs to Be More Transparent

July 24, 2018, 5:14 PM UTC

Last week, representatives from major social media companies appeared before Congress to explain their approaches to removing online content. At the hearing, some Republican members accused the companies of bias against conservative viewpoints. A day later, Facebook CEO Mark Zuckerberg noted that while the company wants to limit the spread of disinformation, Facebook does not want to be the arbiter of truth for every topic, including Holocaust denial.

Together, these events have launched a significant and understandable debate about whether Facebook and other social media platforms are taking down too much content or too little, especially with respect to hate speech and false information.

One thing is clear: We need better systems in place to determine what stays up and what comes down on social media sites. In particular, tech companies should focus on three areas: transparency, a clear appeals process, and user-empowerment tools.

A major complicating factor in this public debate around online content is that we still don’t know very much about how companies make these content removal decisions. Recently, Facebook and Google provided some additional information about how they enforce their terms of service, but there is much more we need to know in order to understand how their policies shape our information environment.

That’s why a number of advocacy groups, academics, and researchers joined together to craft the Santa Clara Principles. These recommendations, based on foundational due process principles, address the importance of transparency and accountability when decisions are made to restrict users’ online speech. People need to clearly understand what the rules are, how they are enforced, and how to appeal when their speech is removed in error. And we need better data about the scope, scale, and broader consequences of platforms’ content moderation practices.

For example, it’s become a tech industry best practice to provide semi-annual reports on the number of demands a company receives from governments to hand over user data or remove content. These transparency reports shine a light on government censorship and surveillance and are crucial in helping journalists, researchers, and users hold both governments and companies accountable for practices that affect people’s rights to privacy and free speech.

Companies should do similarly comprehensive reporting about their terms of service enforcement and reveal the number of posts and accounts that are flagged and removed for violating their content policies. Without knowing the gap between what gets flagged and what actually gets removed, it’s hard to understand and demonstrate the risks that over-broad takedowns or under-enforced policies pose for different speakers and communities online.

Companies should include context with their numbers, as well: a low rate of removal for user-flagged content in a certain category could reflect that users don’t understand the policy, that the company has a systemic failure in how it addresses the topic, or that a malicious actor is trying to game the system. Without information about the dynamics behind the numbers, statistics can be used to prop up anyone’s pet conspiracy theory.
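To make the point concrete, here is a minimal sketch, in Python, of the kind of per-category statistics a terms-of-service transparency report might surface. The record format, category names, and figures are invented for illustration and do not reflect any platform's actual schema or data.

```python
from collections import Counter

# Each hypothetical record: (category of the flag, whether the post was removed).
flag_log = [
    ("hate_speech", True),
    ("hate_speech", False),
    ("disinformation", False),
    ("disinformation", False),
    ("harassment", True),
]

flags = Counter(category for category, _ in flag_log)
removals = Counter(category for category, removed in flag_log if removed)

for category, flagged in flags.items():
    removed = removals[category]
    rate = removed / flagged
    # A low rate by itself cannot distinguish user confusion, a systemic
    # policy failure, or coordinated abusive flagging; that is the context
    # the article argues reports should include alongside the raw numbers.
    print(f"{category}: {flagged} flagged, {removed} removed ({rate:.0%})")
```

Even with numbers like these in hand, the removal rate alone cannot tell you which explanation applies, which is why the surrounding context matters as much as the statistics themselves.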

Transparency is part of a larger set of tools we can use to hold companies accountable for the decisions they make about user speech. Appeal and remedy systems are another such tool. Users often aren’t told what rule they have allegedly violated when their posts are removed. Figuring out how to appeal a takedown or suspension is harder than it should be—if the option is even available.

After years of campaigning by advocates and users, Facebook recently announced that it would enable appeals for removals of individual posts in certain categories. While this is a welcome development, Facebook and other platforms still have a ways to go in providing users with a comprehensive ability to appeal decisions, including the opportunity to provide additional context and to receive a clear explanation of the final decision about their speech.

Content moderation on social media relies on systems that have not scaled up with the massive reach of these globe-spanning platforms. We may be seeing the limits of what the notice-and-review approach can handle: Facebook plans to hire 10,000 new content moderators by the end of the year, but that option is really only available to the giants of the Internet. Many platforms emphasize the growing role of automation in content moderation, but different forms of automation carry their own risks. Filtering tools tend to be both over-broad and under-inclusive in what they catch, and machine learning algorithms can “learn” the biases already present in society, leaving marginalized speakers automatically cut out.
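As a toy illustration of that over-broad and under-inclusive trade-off, consider a naive keyword filter. The blocklist and example posts below are hypothetical, not any platform's actual rules.

```python
# Hypothetical blocklist for demonstration only.
BLOCKLIST = {"attack"}

def naive_filter(text: str) -> bool:
    """Flag a post if any blocklisted term appears anywhere as a substring."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

# Over-broad: a benign health post is flagged because of a substring match.
print(naive_filter("Join our seminar on heart attack prevention"))  # True
# Under-inclusive: a trivially obfuscated post slips through untouched.
print(naive_filter("time to att4ck them"))  # False
```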

There are no easy answers to online content moderation at scale, but we can look to a key approach from the old-school Internet: user empowerment. Top-down enforcement of a single content policy is just one way to run a platform; others include involving members of the site’s community in administering and moderating subsections based on those sections’ own norms and policies, or allowing individual users to set their own filters and rules for what they can see and share on the site. These approaches, too, won’t solve every problem, but they can allow the platform to devote its resources to addressing the worst issues and most egregious conduct.
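A user-controlled filter might look something like the following sketch, in which each user maintains their own mute rules that are applied to their feed. The class and field names here are hypothetical and not drawn from any platform's actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class UserFilter:
    """Per-user rules for what that user sees, instead of one global policy."""
    muted_words: set = field(default_factory=set)
    muted_authors: set = field(default_factory=set)

    def allows(self, author: str, text: str) -> bool:
        if author in self.muted_authors:
            return False
        return not any(word in text.lower() for word in self.muted_words)

# Hypothetical feed and filter settings for illustration.
feed = [("alice", "New post about gardening"), ("troll42", "inflammatory rant")]
my_filter = UserFilter(muted_words={"inflammatory"}, muted_authors={"troll42"})
visible = [(author, text) for author, text in feed if my_filter.allows(author, text)]
print(visible)  # [('alice', 'New post about gardening')]
```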

As we debate the appropriate role of platforms in responding to disinformation, hate speech, and propaganda, we must be both cognizant of the power they have and cautious about making them the final arbiters of who has a voice.

Emma Llansó is the director of the Free Expression Project at the Center for Democracy & Technology.

