A Reddit mascot at the company's headquarters in San Francisco.
Photograph by Robert Galbraith — Reuters
By Mathew Ingram
May 15, 2015

For the better part of the decade since it was founded, Reddit has thrived as a kind of quasi-anarchy: users can post pretty much anything and frequently do, including copyright violations and all manner of offensive imagery and commentary. Those users can even become moderators—which gives them a huge amount of power over the content that is posted—without asking permission from anyone. That freedom has made the site one of the most popular online communities around (it had 71 billion pageviews in 2014), but it doesn’t make it a terribly good business. Reddit’s big challenge is to reconcile that freedom with the need to control the content its users post. But is that even possible?

On Thursday, the company’s executive team—including co-founder and chairman Alexis Ohanian, CEO Ellen Pao, and head of community Jessica Moreno—announced that it is taking a firm stand against the harassment and negative behavior that often occurs on the site, especially toward women. In a blog post, they said that while the company values “privacy, freedom of expression and open discussion,” a recent survey of users shows that bad behavior by a few is driving people away: the number one reason existing users don’t recommend the site to friends is that they want to avoid exposing them to hate and offensive content.

“We’ve always encouraged freedom of expression by having a mostly hands-off approach to content shared on our site, [but] instead of promoting free expression of ideas, we are seeing our open policies stifling free expression; people avoid participating for fear of their personal and family safety.”

It should be noted that the new impulse to control this kind of behavior doesn’t just come from a realization that it is affecting users, but because Reddit needs to start thinking like a business instead of an online commune. The company raised $50 million in venture-capital financing last year, from a group that included leading Silicon Valley VC Andreessen Horowitz, and that brings with it the expectation that Reddit will do things like make money. Copyright violations and rampant sexual harassment aren’t much help when that’s your goal, and advertising is your primary means of revenue generation.

Reddit has already made some changes over the past year in an attempt to cut down on avenues for harassment, including new policies on “revenge porn,” in which ex-boyfriends and ex-spouses post nude photos of their former partners without consent. The site also took down an entire forum, or “subreddit,” called The Fappening, which was devoted to pictures of famous actresses and other female celebrities that had been stolen from their iCloud accounts and uploaded en masse.

In the past, Reddit has allowed similar content to survive on the site because of its commitment to free speech and in particular the value of anonymity—something Ohanian has spoken about a number of times. Last year, he talked about why he invested in Secret, one of a number of anonymous apps, saying: “Like all tools, this new publishing technology comes down to how we as individuals use it, but I’m heartened by every post I see that allows someone to share something about themselves that they’d never have been able to with their name attached… anonymity enables us to be truly honest, creative, and open.”

Ohanian and others at Reddit have also pointed out that only a small number of users engage in the kind of harassment and bad behavior they want to squash—the blog post says the new rules “will have no immediately noticeable impact on more than 99.99% of our users.” But how can the site find and remove or block these bad actors when the vast majority of Reddit accounts are anonymous? And how will it define harassment or bad behavior so that it protects users without impinging on free speech? In their post, the executive team says it will define harassment as:

Systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them.

They also say that they want to “prevent attacks against people, not ideas.” In other words, they want to prohibit ad hominem attacks—threats and other harassing behavior aimed at individuals—without stopping people from challenging or attacking ideas. This is a noble statement of purpose, but it’s a lot harder to do than to say, as a number of Reddit users have pointed out. How does the site decide what constitutes torment, or conclude that something is demeaning? Do the users in question determine that, or do Reddit administrators?

Online communities like Reddit are complicated and fragile ecosystems—changing one aspect of them can have a host of unintended consequences, as Reddit’s predecessor Digg found out to its detriment. In pursuing the worthwhile goal of protecting users, the site could wind up trampling the free-wheeling approach that helped it become a huge online community in the first place. In other words, it might become a business, but only by losing its soul. Would the trade-off be worth it?
