Here’s What’s Wrong With Algorithmic Filtering on Twitter
by Mathew Ingram @FortuneMagazine
February 8, 2016, 1:27 PM EST

Algorithmic filtering of some kind is coming to Twitter, it seems. According to CEO and co-founder Jack Dorsey, it isn’t rolling out this week, as some initially speculated, but it is almost certainly coming in some form, and soon. And while it likely won’t kill Twitter, despite what some hysterical Twitter users seemed to fear, it is not a magical solution to Twitter’s problems, and it does have some pretty clear downsides that are worth talking about.

As the hashtag #RIPTwitter started trending following BuzzFeed’s initial report on Saturday, the corporate Twitter machine went into defensive mode: Dorsey responded with a series of tweets saying he was listening and that Twitter values the traditional timeline, and noted Twitter investor Chris Sacca said there was “zero chance” the chronological view would disappear.

“I *love* real-time. We love the live stream. It's us. And we're going to continue to refine it to make Twitter feel more, not less, live!” — Jack (@jack) February 6, 2016

The argument from defenders of a filtered feed is two-fold: 1) Since many new users find Twitter confusing and it takes time to find accounts worth following, giving them an algorithmically sorted feed (i.e., with tweets ranked by a computer program) is a good “on-boarding” strategy. And 2) Almost everyone who follows more than a handful of people already misses plenty of tweets, so sorting things via algorithm isn’t really much different, and is probably better.

Based on all of the commentary from Twitter executives, it seems likely that any algorithmic filtering of the timeline will be an option for users rather than the only way to see your Twitter feed, as The Verge described in a post based on some user testing of the new feature.
So is the fuss over filtering just another molehill that users are turning into a mountain? Is it the same as changing the star that represented favorites into an exploding heart, just another fuss that will blow over in time? Perhaps, although users of social services often come to accept things that might not be good for them. Even a former CTO of Facebook, Adam D’Angelo, acknowledges that there are problems with a filtered feed.

“Prediction: everything you like about Twitter now will be the same and within 2 days you won't notice any change https://t.co/iqxiOQMyZo” — Farhad Manjoo (@fmanjoo) February 6, 2016

If we need an example of both the benefits and the risks of a filtered feed, even one that is theoretically optional for users, we already have a pretty massive one: Facebook. Many supporters of Twitter’s move argue that Facebook users initially complained about filtering too, then eventually went along with it, and engagement at the social network continued to soar. In other words, no big deal.

It’s worth noting, however, that while Facebook allows users to opt out of algorithmic filtering, the opt-out setting is difficult to find (and it automatically resets itself to filtered after a certain period of time). As a result, most people don’t opt out because they don’t even know the option exists. In user design, defaults are everything.

A survey by researchers from the University of Illinois showed that 60% of users didn’t even know that Facebook filters their feed at all. Some might wonder if that’s such a bad thing. Another former Facebook chief technology officer, Bret Taylor, noted on Twitter that an algorithmic feed “was always the thing people said they didn’t want but demonstrated they did via every conceivable metric.” So if users enjoy their experience, who cares whether it’s filtered without their knowledge? Usage goes up, everyone is happy. Where’s the problem?
“Algorithmic feed was always the thing people said they didn't want but demonstrated they did via every conceivable metric. It's just better.” — Bret Taylor (@btaylor) February 6, 2016

The problem with filtering is that the algorithm, which is of course programmed and tweaked by human beings with all their unconscious biases and hidden agendas, is the one that decides what content you see and when. Ultimately, it will decide whether you see photos of refugees on a beach in Turkey and shootings in Ferguson, or ice-bucket videos and photos of puppies. Does that have real-world consequences? Of course it does, as sociologist Zeynep Tufekci has pointed out in a number of blog posts. It can reinforce the “filter bubble” that human beings naturally form around themselves, and that can affect the way they see the world and thus the way they behave in it.

“Whatever plays well to the algorithm will go further since more will see it. There is no getting around this feedback loop.” — Zeynep Tufekci (@zeynep) February 6, 2016

Defenders of Twitter and Facebook point out that newspapers and other forms of media do this kind of filtering and selection all the time. But those outlets at least theoretically have a journalistic mission of some kind (in addition to just wanting to sell newspapers). Do Facebook or Twitter have a commitment to journalism, or accuracy, or any of the other goals media outlets have? Twitter at least has shown in the past that it cares about freedom of the press and is willing to stand up in court to defend those principles. But how will that affect its filtering of your timeline? It has commercial and political considerations as well, since it is a for-profit company.

Facebook, meanwhile, has argued that it doesn’t choose what to show you; you, the user, do that by clicking and liking and sharing. The algorithm, Facebook says, is just a reflection of what you have already said you want.
In other words, Facebook has specifically rejected the idea that it plays any kind of editorial role in what users see. But this seems like dancing around the issue. By definition, algorithmic filtering means that you are not the one choosing what to see and what not to see; a program written by someone else is doing that. And while this may be helpful, given the sheer volume of content out there, it comes with biases and risks, and we shouldn’t downplay them. As social platforms become a larger part of how we communicate, we need to confront those risks head-on.
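Tufekci’s feedback loop is easy to see in a toy model. The sketch below is purely illustrative and hypothetical; it is not Twitter’s or Facebook’s actual ranking logic, and the names, weights, and numbers are invented. It simply contrasts a reverse-chronological feed with a feed sorted by past engagement, then shows that if the top-ranked post attracts the most new engagement each round, whatever starts popular pulls further ahead regardless of how new it is.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int       # higher = newer
    engagement: int = 0  # likes/shares accumulated so far

def chronological(feed):
    # Traditional timeline: newest first, engagement ignored.
    return sorted(feed, key=lambda p: p.timestamp, reverse=True)

def ranked(feed):
    # Hypothetical filtered timeline: most-engaged first.
    return sorted(feed, key=lambda p: p.engagement, reverse=True)

def simulate(feed, rounds=5, boost=10):
    # Each round, posts gain engagement in proportion to their rank:
    # the top post gains the most, so popularity compounds.
    for _ in range(rounds):
        for rank, post in enumerate(ranked(feed)):
            post.engagement += max(boost - rank, 0)
    return feed

feed = [Post("news",    timestamp=3, engagement=2),
        Post("puppies", timestamp=2, engagement=5),
        Post("friend",  timestamp=1, engagement=0)]

simulate(feed)
# The chronological view still leads with the newest post ("news"),
# while the ranked view leads with whichever post started out most
# popular ("puppies"), and the gap only widens with each round.
```

In this model, defaults decide everything: the same three posts produce different front pages depending solely on which sort function the platform ships as the default view.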