The posts were based on an urban myth. The Rome Statute in fact covers crimes against humanity, not Facebook’s relationship with its users. But the popularity of such posts reminds us that the #DeleteFacebook movement we have seen in recent days has much deeper roots.
Every now and then on Facebook’s long journey to platform hegemony, a much-hyped “Facebook Killer” has come along—a new platform that might provide a viable alternative, built on better principles. First there was “Diaspora,” which began as a viral crowd-funding campaign to create a non-profit, user-owned, decentralized alternative to Facebook. Mark Zuckerberg even chipped in—a solid indication that he regarded Diaspora as little more than a charming experiment. Then, in 2014, the social network Ello came to life with a powerful sales pitch: “Facebook’s real mission isn’t to make you happy. It isn’t to connect you with old friends, or to facilitate interesting conversations. You (the humble Facebook user) are not the customer … The people who buy [your] data are the real customers, and that’s to whom Facebook’s business is oriented.” This promise was seductive enough that at one point, its founder claimed that 31,000 people an hour were signing up to the new platform. (Though in time, these users fell away, with the clunky product not living up to the sexy hype.)
These are all preludes to the current moment. What we are seeing, in light of the Cambridge Analytica scandal, is the emergence of a much broader political consciousness with regard to the platform. The opportunity ahead is for all of us, as users, not simply to “win” occasional concessions from Facebook, but to start to re-imagine the social (and financial and political) contract that we have entered into with the platform.
Far from the organic, free-roaming paradise the early Internet pioneers imagined, there is a growing sense that we are now living in a world of participation farms, where a small number of big platforms have fenced, and harvest for their own gain, the daily activities of billions. This has a real price. In a Guardian poll in 2017, less than one-third of Americans agreed that Facebook was good for the world, and a paltry 26% believed that Facebook cared about its users.
So the big question ahead is: How might things be structurally different? How might we create models that don’t merely offer the vanilla “power to share” that Facebook touts, but actually share value and governance decisions, along with delivering greater transparency and freedom?
The promise of “platform co-ops”
The University of Colorado Boulder’s Nathan Schneider is one of the leaders of a growing (but still quite academic) movement that is championing what he calls “platform co-ops,” democratically run and governed cooperatives reimagined for a world of peer-based technology platforms, not just farms and factories. This movement wants to turn the participation farm into something that looks more like a digital kibbutz.
Schneider points to models in more traditional industries that run along these lines, like the U.K. department store chain John Lewis, which in 1929 was put into a trust owned by its employees, who share in the retail chain’s profits and elect representatives to its governing board. For new power platforms, he argues, it is their millions of users, not just those on the payroll, who should share in the value created, have a say in big decisions, and be represented in the governance of these platforms. He has proposed that Twitter’s (TWTR) users try to buy it back, arguing that it serves an essential public function. In his mind, for all its challenges, the problem isn’t that Twitter isn’t working for its users—he cites the powerful justice movements that rely on it as evidence that it is. The problem is that “Wall Street’s economy has become Twitter’s economy.”
The #BuyTwitter movement championed by Schneider and others was significant enough that it ended up as one of the five proposals on the table at Twitter’s 2017 annual general meeting. It made a strong case for a Twitter that would function very differently.
A community-owned Twitter could result in new and reliable revenue streams, since we, as users, could buy in as co-owners, with a stake in the platform’s success. Without the short-term pressure of the stock markets, we could realize Twitter’s potential value, which the current business model has struggled to do for many years. We could set more transparent, accountable rules for handling abuse. We could re-open the platform’s data to spur innovation. Overall, we’d all be invested in Twitter’s success and sustainability. Such a conversion could also ensure a fairer return for the company’s existing investors than other options.
This motion was not embraced by Twitter Inc. And it would be very tough to flip Twitter into a co-op in this way. But it points to a compelling alternative vision, for us as participants and for the next generation of platform creators.
In fact, companies with this cooperative-inspired philosophy and model are beginning to emerge. The photo-sharing co-op Stocksy brings together photographers and filmmakers, giving them an opportunity to license their work. They are a proud platform co-op but also a serious and growing multimillion-dollar business. As they put it: “(Think more artist respect and support, less patchouli.) We believe in creative integrity, fair profit sharing, and co-ownership, with every voice being heard.”
For platform co-ops and similar ideas to succeed, governments will have to make it easier to raise money to scale without relying on big investors or the traditional capital markets. There is also a real engineering challenge in ensuring that these types of models move beyond the artisanal to the mainstream. But if the technical piece can be cracked, it’s not hard to see the moral (and financial) appeal to users and content creators. Especially given Facebook’s increasingly strained relationship with its users, a rival that offered a similar user experience but a much fairer deal might easily attract defectors.
A decade ago, none other than the father of the World Wide Web, Tim Berners-Lee, saw the dangers of participation farms like Facebook looming.
In 2008, almost 20 years after laying out his original vision, he called for the building of “decentralized social networks” that would reclaim his beloved web from increasingly centralizing sites. He saw a big prize in a more fluid and pluralistic world of platforms in which “online social networking will be more immune to censorship, monopoly, regulation, and other exercise of central authority.”
Today, he is hard at work on a project to address that very issue, a plan to radically alter the way web applications work, one that would divorce all our personal data and content from the apps and platforms that now—often literally—own it. Berners-Lee’s Solid project would allow us to own our own data as part of a personal secure “pod” in which we would carry around our digital lives. So imagine that, rather than having all your data on a third-party platform, you now take it with you. (This is what geeks call “interoperability.”) You walk around with your photos, friends, health histories, a map of all the places you have traveled, a list of all your purchases—even the online reputation you have built up in various platforms, an especially powerful commodity. You are liberated to decide what access you would like to grant—and on what terms—to whom. Solid is much more than a different kind of technology; it is a different philosophy. With Solid, your data “reports to you.”
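The real Solid project builds on web standards for identity and linked data; the sketch below is only a toy illustration of the core idea, with every class and method name invented here. It shows data that stays with its owner, and access that is granted (and revoked) app by app:

```python
class DataPod:
    """Toy model of a personal data store: the user holds the data
    and grants or revokes access per-app, per-category.
    Illustrative only; not the actual Solid API."""

    def __init__(self, owner):
        self.owner = owner
        self._data = {}      # category -> value
        self._grants = {}    # app -> set of permitted categories

    def store(self, category, value):
        self._data[category] = value

    def grant(self, app, category):
        self._grants.setdefault(app, set()).add(category)

    def revoke(self, app, category):
        self._grants.get(app, set()).discard(category)

    def read(self, app, category):
        # Apps see only what the owner has explicitly allowed.
        if category not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no access to {category}")
        return self._data[category]


pod = DataPod("alice")
pod.store("photos", ["beach.jpg", "dog.jpg"])
pod.grant("photo-app", "photos")
print(pod.read("photo-app", "photos"))   # the app reads with permission
pod.revoke("photo-app", "photos")        # the data "reports to you"
```

The inversion is the point: in this model the platform asks the pod for data on the user's terms, rather than the user surrendering data on the platform's terms.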
Another solution to the same problem comes in the great—and much-hyped—hope of the Blockchain. The Blockchain is a distributed public ledger that allows everyone to record and see what transactions have taken place. Unlike a centralized secret ledger—such as those of banks—it is transparent. And transactions are verified not by a central force, but as a distributed process. You might know the Blockchain from its most famous (and controversial) application to date: It is the underlying technology on which the virtual currency Bitcoin is built.
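The core trick behind such a ledger can be shown in a few lines: each block carries the cryptographic hash of the block before it, so tampering with any past entry breaks the chain for everyone to see. This is only an illustrative sketch (no mining, no peer-to-peer network), with the transaction names invented for the example:

```python
import hashlib
import json
import time

def hash_block(block):
    """Deterministically hash a block's contents with SHA-256."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(transactions, prev_hash):
    """A block records transactions plus the hash of its predecessor."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }

def chain_is_valid(chain):
    """Anyone can verify the ledger: each block must reference
    the exact hash of the block before it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != hash_block(prev):
            return False
    return True

# Build a tiny two-block chain.
genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
block2 = make_block(["bob pays carol 2"], prev_hash=hash_block(genesis))
chain = [genesis, block2]

print(chain_is_valid(chain))                        # True
chain[0]["transactions"][0] = "alice pays bob 500"  # tamper with history
print(chain_is_valid(chain))                        # False: the chain breaks
```

Because every participant can run this verification independently, no central record-keeper is needed to establish who owns what.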
For non-technologists—even those who have spent hours trying to get their heads around this—the way this actually works can be hard to grasp. But the most important things to understand are the potential human applications. As The Economist puts it, “It offers a way for people who do not know or trust each other to create a record of who owns what that will compel the assent of everyone concerned. It is a way of making and preserving truths.”
The potential of this is as huge as the hype (although, like all technologies, Blockchain remains vulnerable to co-optation and capture). It opens up a world where users might exchange value directly without an extractive middleman. We can easily imagine real estate contracts or financial transactions living on the Blockchain. But we might imagine, too, the intermediaries being removed from the mega-platforms of the world—our Facebooks, Ubers, or Airbnbs—when users, drivers and riders, or hosts and guests work out ways to collaborate and exchange directly with each other.
As we look to the future, there is no shortage of predictions about the next participatory technologies and ideas that will transform our lives. Whether it be virtual reality, augmented reality, or blockchains, platforms as we know them today will likely end up feeling rather quaint. But however things turn out, we need to cling to, and build for, a set of principles that ensure the worlds we will live in are less monopolistic, more transparent, and much more attuned to their broader impact.
A public-interest algorithm
To truly reimagine a platform like Facebook, we need to reimagine its algorithm. As Facebook has shown us, social media sites have huge power to alter our consumer preferences, spur or hinder extremism, and sway our emotions with tweaks of code. But today their algorithms function as secret recipes that serve private interests.
So let’s consider how a “public interest algorithm” might work instead. What might a formula designed to favor the interests of platform participants and society at large—instead of just their owners, advertisers, and investors—look like?
It would need three key features. First, the inputs into the algorithm, which shape what content we see and what gets priority, would be fully transparent to the user, including the criteria used by the platform to moderate offensive content or hate speech. Second, every user would have a range of dials that allowed them to alter their world. They could choose to engage with more content they disagreed with. They could “filter in” perspectives and views from those well outside their bubbles. They could reduce sensationalism. Third, the default settings of the algorithm would apply a public interest test, considering how the platform can better serve our broader society. This might operate like an updated version of public broadcasting, bringing to the surface content proven to reduce social tension and extremism and bolster civic discourse, promoting pluralism, and showcasing unserved and underserved communities. This would not come without its challenges—legitimate debates would need to be had around whether, how, and how much a platform should “tip the scales” in this way—but it is more a moral challenge than an engineering one.
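As a thought experiment, those dials could be as simple as user-visible weights in the ranking formula. Every field name, weight, and post in the sketch below is hypothetical, an illustration of the idea rather than any real platform's code:

```python
def score_post(post, dials):
    """Rank a post from fully transparent inputs. The `dials` dict is the
    user's own, so each person can tune what their feed optimizes for."""
    return (
        dials["engagement"]       * post["predicted_engagement"]
        - dials["sensationalism"] * post["sensationalism"]
        + dials["outside_bubble"] * post["viewpoint_distance"]
        + dials["civic_value"]    * post["civic_value"]
    )

# A public-interest default: penalize clickbait, surface outside views,
# reward civic value, rather than maximizing engagement alone.
DEFAULT_DIALS = {
    "engagement": 0.3,
    "sensationalism": 1.0,
    "outside_bubble": 0.5,
    "civic_value": 0.8,
}

posts = [
    {"id": "clickbait", "predicted_engagement": 0.9, "sensationalism": 0.9,
     "viewpoint_distance": 0.1, "civic_value": 0.1},
    {"id": "local-news", "predicted_engagement": 0.4, "sensationalism": 0.1,
     "viewpoint_distance": 0.6, "civic_value": 0.8},
]

feed = sorted(posts, key=lambda p: score_post(p, DEFAULT_DIALS), reverse=True)
print([p["id"] for p in feed])  # ['local-news', 'clickbait']
```

The engineering here is trivial, which is the point of the passage above: the hard questions are about which defaults society should choose, not about how to compute them.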
To advance solutions like these, we, as the participants on the participation farms, need to do more than just lament our fates. It will take inspired and dedicated effort—by technologists, entrepreneurs, the platforms themselves, and all of us—to renegotiate the contract with which we participate.
#DeleteFacebook is just the start. We need to go deeper. In the coming months, we may see #ReformFacebook, #RegulateFacebook, and even #ReplaceFacebook take off.
Henry Timms is president and CEO of 92nd Street Y. Jeremy Heimans is the co-founder and CEO of Purpose. This article was adapted from the book NEW POWER by Jeremy Heimans and Henry Timms.