Twitter has spent much of its life promoting itself as a haven for free speech—the “free-speech wing of the free-speech party,” as a number of senior executives have described it. But that commitment is proving to be a lot more complicated than Twitter probably hoped it would be, as it tries to figure out how to cope with systemic harassment and abuse.
The latest flashpoint in this ongoing battle came on Tuesday, when Twitter permanently banned the account of notorious troll Milo Yiannopoulos—also known as Nero—following a torrent of racist and sexist abuse directed at comedian Leslie Jones, abuse she says has forced her to leave the service completely.
Twitter (TWTR) hasn’t commented publicly about the banning of Yiannopoulos, because the company has a policy of not commenting on specific user accounts. However, it did release a statement saying: “People should be able to express diverse opinions and beliefs on Twitter. But no one deserves to be subjected to targeted abuse online, and our rules prohibit inciting or engaging in the targeted abuse or harassment of others.”
Somewhere in between those two sentences is the line that Twitter is trying to find, and is being forced to draw: When does expressing an opinion, or engaging in an argument or debate, turn into orchestrated or targeted abuse and harassment?
At the heart of the problem is the fact that Twitter has spent so much time touting itself as a protector and defender of free speech, unlike other more restrictive platforms such as Facebook (FB). Co-founder and former CEO (and current board member) Evan Williams and others have written a number of times about how “the tweets must flow” in response to demands for censorship from various governments.
Twitter has promoted the fact that it stands up to these demands and fights for the rights of users. But now it has to prove that it can find a way to defend the free-speech rights of some users, while protecting others from the harassment and abuse caused by that speech.
In the case of Yiannopoulos, the technology editor for the right-wing news site Breitbart, the abuse directed at Jones appears to have been the last straw. He had his account suspended for similar kinds of behavior during the “GamerGate” controversy, and eventually lost his verified-user badge because he was seen as encouraging and participating in a campaign of abuse.
Although his tweets have all disappeared now that his account is banned, screenshots show that Yiannopoulos made disparaging and sexist comments about Jones, and also retweeted a fake image that purported to be a homophobic tweet from her.
It’s not clear whether the Breitbart editor also created the fake tweet, but it was likely enough to draw Twitter’s attention, since impersonating another user is a clear violation of Twitter’s rules, and he has done so before. There is also some evidence that Yiannopoulos has deliberately orchestrated abuse as part of a campaign by racist users of 4chan and other Internet forums.
While he may be an obvious candidate for blocking or banning based on a pattern of abusive behavior, not everyone is satisfied that getting rid of trolls such as Yiannopoulos—or conservative writer Chuck Johnson, who has also been permanently banned—shows that Twitter knows how to combat abuse.
The company has been criticized for some time by victims of harassment who say that it goes after prominent bad actors like Yiannopoulos when enough noise is made about them, but doesn’t do enough to stop abuse by accounts that don’t have his public profile or get as much attention. The company admitted on Tuesday that there was some truth to those criticisms, saying:
We know many people believe we have not done enough to curb this type of behavior on Twitter. We agree. We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it’s happening and prevent repeat offenders.
Twitter went on to say that the company is in the process of reviewing its hateful conduct policy to “prohibit additional types of abusive behavior and allow more types of reporting, with the goal of reducing the burden on the person being targeted. We’ll provide more details on those changes in the coming weeks.” Twitter also recently said verification will now be open to any user, not just celebrities and the media.
For free-speech advocates, including Jillian York of the Electronic Frontier Foundation, part of the problem is the idea of a for-profit corporation deciding what is acceptable speech or behavior and what isn’t. Although these companies have the right to set whatever rules they wish for their users, Twitter and Facebook have become such a dominant force in online discussion that banning people permanently has disturbing implications.
Because of the way it developed—with a focus on free and unfettered speech, and a tolerance for anonymity that Facebook and other services have never had—Twitter’s job is a lot harder. How can it crack down on systemic abuse without making it look as though its commitment to free speech was just a sham? Finding a balance between those two things may actually be impossible.
And even if it isn’t impossible, as Jillian York points out, it would take a massive investment of time and resources to develop a system that could stamp out such abuse among more than 300 million accounts posting hundreds of millions of tweets every day. Does Twitter even have the ability to do that, given its money-losing status and a share price that is still near historic lows?