When do Twitter block lists start infringing on free speech? by Mathew Ingram @FortuneMagazine June 12, 2015, 12:54 PM EDT

If the early days of the social web were all about increasing the amount of freedom that people had to express themselves, more recent history seems to be about moderating or restricting or circumscribing that freedom—whether it's Facebook censoring things, or Reddit banning offensive sub-groups like r/fatpeoplehate, or Twitter making it easier for people to share lists of users they have blocked for abuse.

On the one hand, these kinds of measures are clearly necessary. Many users of these services—and particularly women, visible minorities, transgender people, etc.—are routinely harassed and targeted for abuse, as we've seen with issues like GamerGate, and platforms like Twitter have been criticized for not doing enough to protect them. Block lists are clearly an attempt to make up for that.

By adding this feature, Twitter is taking a cue from popular third-party services like BlockTogether, which was created by a former engineer at Twitter. There are even automated account filters or bots that allow users to outsource their blocking. Anyone who makes it onto the list simply never appears in a user's timeline—in terms of the Twitter network, it's as though they never existed.

Just as some Reddit users have complained that their free-speech rights are being infringed by the site's blocking and banning, there are those who argue that Twitter is also encouraging such behavior with its shared block lists. And while it's true that the First Amendment only applies to governments, not individuals or corporations, there is a point at which it's reasonable to get concerned about the impact on speech. "It's kind of a fraught conversation, no question," says Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation.
"It's one thing if a block list is public, so it's transparent, but there are private block lists as well and that's a little disturbing. I've already seen a couple of journalists using them to block people who aren't even really harassing." The risk, York says, is that these kinds of crowdsourced lists could be used in political ways, as a way to "silence specific groups." In a sense, they could become a new kind of blacklist, in the same way that lists of suspected Communists were used to deny certain people work in Hollywood in the 1940s and '50s.

As a number of critics of block lists have pointed out, people can have their names added to a list by mistake, or due to an overzealous interpretation of what constitutes abuse or harassment. Since those lists can then be shared and used to fuel automated block bots and other services, appearing on such a list could cause a cascade effect, where that person winds up being tarred unfairly with accusations of abusive behavior. In at least one case, a prominent feminist who herself had been the victim of a sustained Twitter abuse campaign—British writer Caroline Criado-Perez, whose case was used by many to argue that Twitter's anti-harassment tools were inadequate—was added to a block list shared by a wide group of users. And as is the case with other crowd-powered tools such as Wikipedia, there isn't really any means of appeal.

Defenders of the shared block-list approach such as veteran blogger and ThinkUp co-founder Anil Dash argue that no one's free-speech rights are being infringed in such cases for several reasons, including the fact that private platforms such as Twitter aren't covered by the First Amendment. They also argue that even if we are committed to free speech as a principle, that doesn't give anyone a right to be heard.
Critics like MIT Technology Review editor Jason Pontin, however, argue that the whole point of supporting free speech is that we should do so even when those who are speaking are saying something horrible, otherwise the principle is meaningless. As Voltaire's biographer Evelyn Beatrice Hall put it: "I disapprove of what you say, but I will defend to the death your right to say it."

The problem, of course, is that tools like Twitter—massive, distributed networks that allow an unprecedented variety of speech from individuals around the world—have never really existed before, and so we don't know exactly what to do with them or how we should behave. Is Twitter like the village square, as Twitter CEO Dick Costolo has said? It sort of is, but then it's also owned by a corporate entity, which has shareholders and fiduciary duties. And the problem of harassment is a very real one.

One good thing about crowdsourced or shared block lists, York says, is that they are better than having Twitter itself choose who should be blocked and who shouldn't. "In general, I think these tools are much better than if we just let the company decide. That would be terrible. At least this is real community policing."

If nothing else, it's clear that being the "free-speech wing of the free-speech party"—as Twitter has famously described itself in the past—is getting harder and harder.