What Governments Can Learn From Airbnb And the Sharing Economy
Arun Sundararajan is a professor of business at New York University. This excerpt is from his new book, The Sharing Economy, which was published in June 2016.
A potential guest recently sued online home rental service Airbnb over allegations of racial discrimination by some of its hosts. While the company has since reiterated its commitment to rooting out racial bias, including hiring a former ACLU head to oversee its efforts, the case highlights a broader set of societal challenges we will face as the sharing economy expands into new services.
In my new book, The Sharing Economy, I discuss how platforms like Airbnb and Lyft blur the lines between our personal and professional lives, creating gray areas for business owners, consumers and governments. It is clear that these digital platforms are not inherently biased, but rather services that have given transparency to undesirable human biases that manifest when people make choices about the usage of spaces they consider personal. This calls for new regulatory solutions to old problems.
For now, the sharing economy remains largely unregulated. Eventually, that could change. Despite some regulators' fears, the sharing economy may lead not to the decline of regulation but to its opposite, providing a basis upon which society can develop more rational, ethical, and participatory models of regulation. But what regulation looks like, as well as who actually creates and enforces it, is also bound to change.
There are three emerging models – peer regulation, self-regulatory organizations, and data-driven delegation – that promise a regulatory future for the sharing economy best aligned with society’s interests. In the adapted book excerpt that follows, I explain how the third of these approaches, of delegating enforcement of regulations to companies that store critical data on consumers, can help mitigate some of the biases Airbnb guests may face, and why this is a superior alternative to the “open data” approach of transferring consumer information to cities and state regulators.
Consider a different problem: collecting hotel occupancy taxes from hundreds of thousands of Airbnb hosts rather than from a handful of corporate hotel chains. The delegation of tax collection to Airbnb, something a growing number of cities are experimenting with, has a number of advantages. It is likely to yield higher tax revenues and greater compliance than a system where hosts are required to register directly with the government, something occasional hosts seem reluctant to do. It also sidesteps privacy concerns resulting from mandates that digital platforms like Airbnb turn over detailed user data to the government. There is also significant opportunity for the platform to build credibility as it starts to take on quasi-governmental roles like this.
There is yet another advantage, and the one I believe will be the most significant in the long run: it asks a platform to use its own data to ensure compliance with a set of laws, delegating enforcement responsibility to the platform itself. You might say that the task in question here (computing the tax owed, collecting it, and remitting it) is technologically trivial. True. But I like this structure because of the potential it represents. It could be a precursor for much more exciting delegated possibilities.
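To make concrete just how trivial the delegated task is, here is a minimal sketch of platform-side occupancy-tax computation. All names, the flat 6% rate, and the data shapes are illustrative assumptions, not Airbnb's actual system or any city's actual rules.

```python
# Hypothetical sketch of delegated occupancy-tax collection: the platform
# computes and withholds the tax on each booking, then remits an aggregate
# per-host total to the city, so individual hosts never need to register
# with the government themselves. The 6% flat rate is an assumption.

OCCUPANCY_TAX_RATE = 0.06  # assumed flat city rate, for illustration only

def tax_owed(nightly_rate: float, nights: int) -> float:
    """Tax due on a single booking."""
    return round(nightly_rate * nights * OCCUPANCY_TAX_RATE, 2)

def remittance_report(bookings: list[dict]) -> dict[str, float]:
    """Aggregate tax withheld per host for a periodic remittance filing."""
    report: dict[str, float] = {}
    for b in bookings:
        report[b["host_id"]] = report.get(b["host_id"], 0.0) + tax_owed(
            b["nightly_rate"], b["nights"]
        )
    return {host: round(total, 2) for host, total in report.items()}
```

The point of the sketch is not the arithmetic but the architecture: the raw booking data never leaves the platform; only the audited aggregate does.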
For a couple of decades now, companies of different kinds have been mining the large sets of "data trails" customers provide through their digital interactions, generating insights of business and social importance. One such effort we are all familiar with is credit card fraud detection. When an unusual pattern of activity is detected, you get a call from your bank's security team. Sometimes your card is blocked temporarily. The enthusiasm of these digital security systems is sometimes a nuisance, but it stems from your credit card company using sophisticated machine learning techniques to identify patterns that prior experience has told it are associated with a stolen card. This saves billions of dollars in consumer and corporate funds by detecting and blocking fraudulent activity swiftly.
A more recent visible example of the power of mining large data sets of customer interaction came in 2008, when Google engineers announced that they could predict flu outbreaks using data collected from Google searches, and track the spread of flu outbreaks in real time, providing information well ahead of what was available through the Centers for Disease Control and Prevention's (CDC) own tracking systems. The Google system's performance deteriorated after a couple of years, but its impact on public perception of what might be possible using "big data" was immense.
It seems highly unlikely that such a system would have emerged if Google had been asked to hand over anonymized search data to the CDC. In fact, there would probably have been widespread public backlash on privacy grounds. Besides, this capability emerged organically from within Google partly because Google has one of the highest concentrations of computer science and machine learning talent in the world.
Similar approaches hold great promise as a regulatory approach for sharing economy platforms. Consider the issue of discriminatory practices. There has long been anecdotal evidence that some yellow cabs in New York discriminate against some nonwhite passengers. There have been similar concerns that such behavior may start to manifest on ridesharing platforms and in other peer-to-peer markets for accommodation and labor services.
For example, a 2014 study by Benjamin Edelman and Michael Luca of Harvard suggested that African American hosts might have lower pricing power than white hosts on Airbnb. While the study did not conclusively establish that the difference is due to guests discriminating against African American hosts, a follow-up study suggested that guests with “distinctively African American names” were less likely to receive favorable responses for their requests to Airbnb hosts. This research raises a red flag about the need for vigilance as the lines between personal and professional blur.
One solution would be to apply machine-learning techniques to identify patterns associated with discriminatory behavior. No doubt, many platforms are already using such systems. During a September 2014 panel discussion I participated in at the Techonomy Detroit conference, the moderator, Jennifer Bradley of the Aspen Institute, asked TaskRabbit's then-president (now CEO) Stacy Brown-Philpot whether the platform had "flags or protections or things that could alert you to discrimination in the system or bad actors." "We do. We have a data science team that we run [to] constantly to make sure we're flagging and alerting human beings to actually go through and look at it," Brown-Philpot replied, "and we actually track data on what drives somebody to select a tasker, and you can see all their pictures so you know what they look like, and the most important thing is a smile. That's it."
Data science holds tremendous promise as a way to detect systemic forms of discrimination that are often difficult to identify on a case-by-case basis during face-to-face interaction, but that may be brought to light and addressed with data analytics. For example, Lyft and Uber could quite easily detect and flag in real time the patterns of passenger acceptances and refusals that might correspond to discriminatory behavior on the part of their drivers. But why leave this to the platforms' discretion? Rather, there is the opportunity to delegate the enforcement of a range of different laws to these platforms, perhaps asking for audited records of compliance in exchange.
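A hypothetical sketch of such real-time flagging might compare a driver's refusal rate toward one rider group against the platform-wide baseline using a simple two-proportion z-test, escalating anomalies to human review. The test choice, the threshold, and all names here are my own illustrative assumptions; real platforms likely use far richer models.

```python
# Minimal sketch of flagging statistically anomalous refusal patterns,
# e.g. a driver who declines ride requests from one rider group far more
# often than the platform-wide baseline. A two-proportion z-test is one
# simple choice; the threshold of 3 standard deviations is an assumption.
import math

def refusal_z_score(driver_refusals: int, driver_requests: int,
                    baseline_refusals: int, baseline_requests: int) -> float:
    """z-statistic comparing a driver's refusal rate to the baseline rate."""
    p_driver = driver_refusals / driver_requests
    p_base = baseline_refusals / baseline_requests
    pooled = (driver_refusals + baseline_refusals) / (driver_requests + baseline_requests)
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / driver_requests + 1 / baseline_requests))
    return (p_driver - p_base) / se

def flag_for_review(driver_refusals: int, driver_requests: int,
                    baseline_refusals: int, baseline_requests: int,
                    threshold: float = 3.0) -> bool:
    """Escalate to human review when the refusal rate is anomalously high."""
    z = refusal_z_score(driver_refusals, driver_requests,
                        baseline_refusals, baseline_requests)
    return z > threshold
```

Note that a statistical flag of this kind is evidence of an anomaly, not proof of intent; the point of the design is to route cases to human reviewers, mirroring how credit card fraud alerts work.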
A more radical alternative, one that has been proposed both by Nick Grossman of Union Square Ventures (whose blog, the Slow Hunch, paints a fascinating picture of regulation in the digital economy) and by influential tech blogger Alex Howard, is to mandate that the platforms hand over actual operational data to city and state governments, who can then use the data to regulate. I prefer my idea of data-driven delegation to this alternative approach of "mandated transparency," since it raises fewer of the familiar privacy concerns and poses lower risks of leaking competitively harmful information. There is precedent for this approach: publicly traded corporations are, in some sense, also regulated in a delegated way. They provide audited summary evidence (through their filings with the SEC) rather than raw operational data for a regulator to use in confirming compliance. Correspondingly, leaving the data inside the platform's own systems, while mandating its use in regulation, seems far more efficient.
As sharing-economy self-regulatory organizations (whether platforms themselves or third-party associations that emerge) establish a track record of credibility and enforcement and gain legitimacy as partners in regulation, they can perhaps be called on to help invent self-regulatory solutions to social issues that are especially difficult to address through centralized governmental intervention. One might imagine a variety of societal objectives being achieved in part by platforms applying machine-learning techniques to their data to detect patterns, or integrating some notion of social responsibility into the design of their software systems.
This approach of data-driven delegation can yield far more expansive "regulating through data" alternatives than are feasible with complete transparency, and it suggests promising opportunities for self-regulation, ones that appropriately reflect the interesting meld of decentralized marketplace and centralized institution that sharing-economy platforms represent. Put differently, the sharing economy might offer innovative approaches not just to its own regulatory challenges, but to unresolved regulatory challenges that predate its emergence.