Facebook’s share price fell more than 20% in after-hours trading yesterday. Why? The company announced revenue and daily active user numbers that fell short of analyst estimates (revenue of $13.04 billion against an expected $13.32 billion for the quarter, and 1.47 billion daily users rather than 1.48 billion) and, more worryingly for investors, it warned that its increasing privacy efforts would dent its growth rate further in the second half of this year.
Indeed, user numbers in North America are flat and those in Europe have actually fallen. So the expectation that Facebook was just going to shrug off Europe’s hard-hitting new General Data Protection Regulation (GDPR), as well as the privacy scandals that have plagued it through much of the year so far, was way off the mark. These things are making a difference, and they will continue to do so.
Here’s what CFO David Wehner said on the fateful earnings call: revenue growth will probably “decline by high single-digit percentages from prior quarters,” and “we are also giving people who use our services more choices around data privacy which may have an impact on our revenue growth.” Wehner also noted that Facebook is pushing harder to promote Stories, a format that currently brings in less money than the News Feed, which is where all those problematic fake-news issues fester.
So, is this bad? I think journalist-turned-investor Kim-Mai Cutler hit the nail on the head when she noted on Twitter that “the last three to six months have been reporters screaming about [Facebook] to take more civil, public accountability (which I support). So maybe the headline should be that it’s OK [for Facebook to take] a $142B hit to resolve really serious, long-term structural issues.”
The fact is that the changes wrought by the GDPR and the Cambridge Analytica affair were long overdue. Facebook has been able to achieve its mammoth scale largely through its free exploitation of people’s data, and those people are now both wise to the implications and—in Europe and soon California—able to do something about it.
Facebook and other big tech platforms have to seriously address data protection if they are to maintain or regain users’ trust, which is essential for engagement and long-term growth, and stay on the right side of the law while they’re at it. That means taking this hit while they adjust, even if it makes it harder to provide pinpoint targeting for advertisers. No pain, no gain.
But there is one particular long-term issue they need to figure out, and fast. These companies train their algorithms on the “big data” coming from their users. Now that some users can pull their data out of that mix or limit how it is used, the companies must ensure that their algorithms stay useful for everyone, not just for the subset of users who leave their newfound privacy options untouched. The future of these companies’ nascent AI efforts depends on making them relevant to all users, no matter where each person chooses to sit on privacy’s sliding scale.