Facebook Tries to Bring More Transparency to Opaque Set of Guidelines

April 24, 2018, 9:00 AM UTC

Facebook has released more comprehensive “community standards”—the rules for what can and can’t be posted on its service—in an effort to provide more transparency at a time when many users and regulators are increasingly wary of the social media giant’s practices.

This new code of conduct, all 27 pages of it, isn’t actually new. But it is meant to provide more in-depth context and clarification of the rules already in place. Until now, the full set of guidelines was only accessible internally at Facebook; users had access only to a condensed and more broadly worded version of the rules.

“We want to provide lots of clarity around what we mean when we use terms like ‘bullying’ or ‘threat,'” Monika Bickert, vice president of the company’s global policy management team, told reporters in announcing the news. “It’s not always intuitive, and people in different parts of the world will often define these things very differently. People sitting in the same room will often define these things differently. So we are putting out there that we have very detailed definitions.”

Indeed, even detailed definitions can be tricky when it comes to matters of hate speech and other forms of offensive content. Here’s one example: Per the company’s newly-released community standards, Facebook doesn’t allow imagery that celebrates violence committed against people or animals. What does that mean?

According to the company’s now-publicly-available standards, this includes images with captions that contain any of the following: enjoyment of suffering, enjoyment of humiliation, erotic response to suffering, remarks that speak positively of the violence, or remarks that indicate the poster is sharing footage for sensational viewing pleasure. But even with these exhaustive and specific examples, these categories can be hard to ascertain and prove, particularly when dealing with more than 2 billion users and the mountains of content they post daily.

What looks like a violation to one person might not to another. Case in point: Animal abuse images are supposed to be permissible when they are aimed at drawing attention to the problem. But a few years ago, the organization People for the Ethical Treatment of Animals (PETA) complained that Facebook took down one of its more graphic photos of discarded carcass parts.

Here’s another example of how specific—and often subjective—Facebook’s distinctions between kosher and unkosher content can be: Videos depicting “visible innards” or “charred or burning people” are not okay except when shown in a “medical setting.” Facebook also makes exceptions for content that’s deemed “newsworthy,” often a moving and debatable target. In 2016, the company removed an iconic Vietnam War-era photo featuring a naked girl, but then reversed its decision after it was blasted for censorship.

And just finding questionable content—let alone making the right call once it’s flagged—is already a big enough challenge for Facebook. In addition to potentially offensive photos and videos, the social media platform is battling the proliferation of fake accounts, Russian trolls and false information, to name a few other culprits. Under pressure, the company has already put 10,000 people to work beefing up its online safety and security (7,500 of those workers are human moderators who sift through content on the site). And it has said it plans to grow that number to 20,000 by the end of 2018.

Even if not everyone agrees on where Facebook has chosen to draw its lines, the increased visibility into its rules will be welcomed by many. According to Bickert, another reason for publishing the more detailed standards is to spark a dialogue with Facebook users, and to include their voice in how the company evolves its rules of what is and isn’t permissible.

“We have found, over the years, that writing content policy is an iterative process,” Bickert said. “The community on Facebook is always changing and the way they communicate is changing. We necessarily have to keep up with the times, and we have to continue to iterate.”

This iterative process has been in place behind the scenes at Facebook for almost its entire 14-year history. But to be sure, the process itself has gone through many of its own iterations: Bickert, a former Assistant U.S. Attorney, joined the social media player in 2012 and has since hired many others with policy expertise. Last week, the VP also let a handful of reporters sit in on her team’s “Content Standards Forum,” a bi-weekly meeting that aims to evaluate and implement updates to current policies.

Close to 30 employees were on hand for this most recent discussion, including many from Bickert’s team and representatives from Facebook’s legal, community operations, communications, product, diversity, and government divisions. While the detailed subjects brought up in the forum were off-the-record, most of the discussion dealt with how to define and categorize certain types of offensive content.

According to Bickert, these meetings have been going on at Facebook for years, and were started before she joined the company. To provide more clarity on how policies are set at the company (outside of these review meetings), Facebook is now highlighting some of its policy-making processes in its just-issued set of standards. According to Tuesday’s announcement: “We [the content policy team] have people in 11 offices around the world, including subject matter experts on issues such as hate speech, child safety and terrorism. Many of us have worked on the issues of expression and safety long before coming to Facebook… Every week, our team seeks input from experts and organizations outside Facebook so we can better understand different perspectives on safety and expression, as well as the impact of our policies on different communities globally.”

In addition to providing more transparency about its overall process, Facebook also wants to expand its users’ ability to appeal certain decisions to remove or to permit questionable content. For the moment, though, the only avenue available to users is to start an appeals process if they believe their post has been wrongfully removed for reasons of nudity, hate speech or graphic violence. (Facebook says it will add other appeals categories in the near future.)

The increased transparency into Facebook’s decision-making and the expanded ability to appeal some of those decisions are both steps in the right direction for the embattled company. But like many of its recent moves, they will also likely spark even more questions, especially among its critics. One of those questions: Why didn’t the company release its full community standards earlier?
