One of the most significant trends currently reshaping the media industry is the shift away from traditional channels -- e.g., newspapers, magazines, TV networks -- to distributed platforms such as Facebook. Apple has moved into this new landscape as well, with a News app that promises a curated content experience. But can we trust these new gatekeepers?
Facebook (fb) is farther down this road than Apple, with its FB Newswire and its expanding Instant Articles project, and most of what it does relies on algorithms (although human curators are involved in the Newswire). Apple's approach, by contrast, will rely almost entirely on editors, whom it is currently hiring. So will Apple (aapl) apply the same kind of principles to news stories that it does to approving apps?
Consider what just happened to an app called Metadata+: After Apple initially rejected it earlier this year, it was accepted into the App Store -- only to be suddenly removed this week because of "objectionable content." And what does the app do? It sends you an alert every time someone is killed by a U.S. drone strike. It was created by Josh Begley, a researcher for the investigative news site The Intercept, and uses data from the Bureau of Investigative Journalism.
According to Apple, the app is "too narrow in focus." And yet, as more than one critic has pointed out, Apple has no problem at all with apps that consist solely of a button that makes fart noises. Is it because Metadata+'s focus is on politically sensitive information about the U.S. military targeting citizens for extra-judicial executions? Or is it simply that drones killing people is unpleasant?
Regardless, the episode raises the question of what Apple's editorial team will do with news stories that are disturbing or politically sensitive. Will they choose not to include them in the curated product that readers see when they open the News app?
Apple is entitled to do whatever it wants with apps that exist on its platform, obviously, and to include or exclude whatever news it wants in its news product. Technically, removing certain kinds of information isn't censorship, because there is no right to free speech on someone else's platform -- but that doesn't mean it isn't cause for concern, just as it is when Twitter or Facebook removes something.
Facebook, which has been trying hard to become a friend to media companies and news outlets, also routinely removes things that are disturbing, presumably because it wants to maintain a friendly and happy experience for users. But in some cases that means removing things that are newsworthy, such as photos and videos of violence in Syria that have news value, or pictures of dead refugees.
Journalism professor Jay Rosen raised a similar question about Twitter in a recent post, in which he discusses the curation feature the network is working on, called Project Lightning. What can we expect from Twitter's editors? Will they show us everything? At least Twitter has talked about being the "free-speech wing of the free-speech party," although that was under previous management.
As more and more of the news we consume comes through platforms like Facebook, Twitter and Apple, it's fair to ask what commitment (if any) these new gatekeepers have to journalistic principles, or to the truth, or even to the news. Or is news just another app that's designed to keep users happy?