While Facebook has historically resisted government efforts to regulate its operations, it's now become obvious, even to CEO Mark Zuckerberg, that the company is incapable of regulating itself alone.
But Facebook’s cardinal sin wasn’t how it responded to attempts to investigate and regulate the company. It was that it didn’t ask for government regulation much earlier.
Yes, Zuckerberg is a really smart guy. But he doesn’t know everything. There’s no way he could have foreseen every way that malicious actors would misuse Facebook, from spouting hate to misleading people.
His mistake was assuming that he could.
It should have been obvious that when you give more than 2 billion people access to something, someone's going to try to do something bad. Rather than assuming they could handle everything themselves, Facebook's executives should have said publicly, years ago, "We can't control or predict the behavior of nearly a third of the world's population. This platform has tremendous potential to do good, but it has equal potential to do harm. That's why we're partnering with regulators, experts, academics, and other companies to figure out where our service can be misused and do whatever we can to prevent it."
Would that have automatically prevented Russian and Iranian interference in U.S. politics, or xenophobic Alex Jones rants? Probably not. But it very well may have mitigated them. And when things inevitably did go wrong, in place of the finger-pointing and excuse-making we see now, Facebook could have credibly claimed that it had done its best.
This isn’t a problem limited to Facebook. Drones, artificial intelligence, cryptocurrency, autonomous vehicles, and many other new ideas pose both amazing potential and tremendous risk. Just as it’d be stupid to say, “We can’t try out any new technologies or ideas until we understand and can control anything that could possibly go wrong,” it’s equally stupid to say, “Because we coded this thing, we can now see the future in every conceivable way—so just trust us to handle everything on our own.”
It’s become clear that we need more, earlier regulation of new industries and ideas. Of course, it’s not that some bureaucrat will know more about a new technology than the entrepreneur who created it. But regulators can think of potential consequences that the company hasn’t. If entrepreneurs and regulators can form solid partnerships, some inevitable problems can be anticipated and preempted. And when things still go wrong, rather than trying to hide their shortcomings like Facebook did, businesses will be able to work together with agencies from a position of trust.
Government is not always a helpful partner. Indeed, most of my time is spent fighting attempts at bad regulation of startups by federal, state, or local government. Regulators sometimes overstep on behalf of industry incumbents who don’t want startups to challenge existing business models. Government should not use the guise of “regulation” to get involved when parties are really fighting over market share.
Tech certainly has an over-regulation problem, but that shouldn't prevent us from looking closely at its coexistent, and just as pressing, under-regulation problem. Leaving companies like Facebook to their own devices creates unnecessary risk and assumes an omniscience those companies don't actually possess. Until we take that more seriously, we'll keep seeing the same problems emerge from tech giants that have grown too large to govern themselves.
Bradley Tusk is the founder and CEO of Tusk Ventures and the author of the recently published memoir, The Fixer: My Adventures Saving Startups from Death by Politics. He does not have any investment in Facebook.