A fundamental flaw in most governmental policy-making is that those making the deals and decisions think they are operating with facts. The reality is that they are operating instead with assumptions, many deeply held, about what causes what to happen. A policy is really a statement of assumed causality, and the law of unintended consequences is ever-present.
In a world with a high assumption-to-knowledge ratio, we know that planning as though you know the outcome can lead to disaster. We’ve seen this in the corporate world. For years, I’ve studied major corporate initiatives that have gone dreadfully wrong (think Iridium, Webvan, Amazon’s Fire Phone, and the like) and, in almost every instance, what you find is that the leaders sponsoring these initiatives based their decision-making on assumptions.
Given their sponsorship, these assumptions rapidly came to be treated like facts, leading everyone involved down a giant rabbit hole until the obvious problems with the projects were revealed.
The antidote to this lies in adopting what I’ve called a discovery-driven approach to investing and making decisions in an uncertain context. The core idea is to peel back layers of uncertainty at the lowest possible cost, planning to the next checkpoint or milestone rather than creating a grand plan that may take you to places you really don’t want to go. It starts by recognizing that you really don’t know what is going to happen when you don’t yet have a good grasp of cause and effect. Thus, what you want is a plan to learn, rather than a plan to prove that you were right when you made your choices.
Let’s take a look at the consequences of failing to test your assumptions and course-correct in time to avoid bad outcomes. Brexit is an interesting current example. Because it seemed ludicrous that a majority of U.K. citizens would vote for a decision that is so clearly not in their own economic self-interest, nobody believed it was even a possibility.
Without taking sides, it is clear that structuring the vote as a simple-majority, yes-or-no referendum on a single option left even its proponents shocked at the outcome. Given the mess that is likely to result, it is pretty clear that its advocates’ assumptions were vastly wrong. Had something more experimental, without such far-reaching consequences, been tried first, people could have learned a lot about what disaffected citizens were thinking.
Or take the policy decisions described in a recent Wall Street Journal article. The Journal’s writers went back to 2000 to explore policy decisions that have, in their words, “upended basic assumptions about modern economics and our political system.” In parsing why American median household incomes have fallen 7% since 2000, they uncover many ways in which economists, in particular, got things wrong.
The policy-makers, for instance, thought that central banks could play a significant role in smoothing out economic cycles and reducing the risk of financial bubbles. Subsequent experience suggests that whatever economists thought, the ability of central banks to play this role is far more constrained than we might have hoped. Indeed, there is some evidence that their attempts to do so cause more harm than good. While scholars work out where those limits lie, one option might be to use techniques such as big data and simulation to predict the effects of central bank activity before we actually pass regulations and make decisions.
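To make that idea concrete, here is a minimal sketch of what such a simulation might look like. The model and every number in it (the effect size, its uncertainty, the noise) are hypothetical, invented purely for illustration; the point is that the decision-maker sees a distribution of possible outcomes, including the chance of doing harm, rather than a single confident forecast.

```python
import random

# Toy Monte Carlo sketch: estimate the effect of a policy move when the
# true causal parameter is uncertain, instead of treating it as a fact.
# All parameters below are hypothetical, for illustration only.

def simulate_outcome(rate_cut, effect_mean=0.5, effect_sd=0.4, noise_sd=0.3):
    """One simulated run: the impact of a rate cut under an uncertain effect size."""
    effect = random.gauss(effect_mean, effect_sd)  # we don't know the true multiplier
    noise = random.gauss(0, noise_sd)              # everything we can't model
    return rate_cut * effect + noise

def policy_distribution(rate_cut, runs=10_000):
    """Summarize the spread of outcomes rather than a point forecast."""
    outcomes = sorted(simulate_outcome(rate_cut) for _ in range(runs))
    return {
        "median": outcomes[runs // 2],
        "p5": outcomes[int(runs * 0.05)],    # plausible downside
        "p95": outcomes[int(runs * 0.95)],   # plausible upside
        "share_negative": sum(o < 0 for o in outcomes) / runs,
    }

print(policy_distribution(rate_cut=1.0))
```

Even a toy model like this forces the assumptions (the effect size, the noise) into the open, where they can be debated and tested before anyone acts on them.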
Another set of assumptions the journalists highlight is that technology, by increasing productivity, would lead to broadly shared prosperity. My colleague over at MIT, Erik Brynjolfsson, was one of the first to document how computerization increased productivity in the 1990s. He’s now examining, with dismay, how software is leading to job losses among those whose skills could be automated.
We can only imagine what will happen as the Internet of everything accelerates. We can’t know what the consequences might be, but it is worth asking whether anyone is thinking about how to reduce the uncertainty around the economic impact of innovations such as driverless cars.
Trade with China is another huge topic on which assumptions have been upended. Lauded as creating great market demand for American goods, opening up to China (and the related dramatic drop in the cost of global shipping created by containerization) has been what the journalists describe as a “shock” of huge proportions to the American economy. As they say, economists were blindsided by outcomes nobody had predicted.
Apparently, no one in a policy role imagined a world in which 25% of all goods shipped in containers would involve trade with China. Focused on an almost ideological belief in the benefits of unfettered trade, policy-makers didn’t ask how the incentives in the system would lead employers to act. Inexpensive labor, favorable treaty agreements, and pressure to deliver high returns to shareholders led tens of thousands of employers to adopt a “China strategy” of offshoring just about everything they could, reducing demand for labor in the U.S. and at the same time lowering labor’s bargaining power.
We are also not paying attention to a second major unintended consequence of trade with China: entire supply chains have been offshored, with the result that even if you wanted to scale a large manufacturing operation in the U.S., it would be prohibitively difficult, an outcome Andy Grove lamented years ago.
So how would one implement a discovery-driven approach in the policy-making sphere? First, articulate and document the assumptions that underlie the choices being made. Next, figure out how those assumptions could be converted to knowledge, as cheaply and quickly as possible. Then, before formulating a grand plan, bring people with differing opinions together for a candid discussion about how they might arrive at the next milestone or checkpoint.
Also, articulate what I call a “cost to learn”: how much it would be worth to get the answer to a specific question that could yield new facts. Take failure out of the vocabulary; talk instead about “hypothesis tests.” Don’t be afraid to change course if emerging facts suggest you’re on the wrong track. And most importantly, plan to learn what will really deliver the desired outcomes, rather than assuming you already know what you are doing.
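As a hedged illustration of how those steps might fit together, here is a minimal sketch of an “assumptions register.” Every assumption, stakes figure, and cost below is invented for the example; the mechanics are what matter: write each assumed causal claim down, price what it would cost to learn whether it is true, and test the cheapest, highest-stakes assumptions first, treating each test as a checkpoint rather than a verdict on a grand plan.

```python
from dataclasses import dataclass

# Minimal sketch of a discovery-driven assumptions register.
# The entries and numbers are hypothetical, for illustration only.

@dataclass
class Assumption:
    statement: str        # the assumed causal claim, written down explicitly
    stakes: float         # rough cost of being wrong about it
    cost_to_learn: float  # price of a test that would convert it to knowledge

    @property
    def priority(self) -> float:
        # Cheap to test and expensive to be wrong about goes first.
        return self.stakes / self.cost_to_learn

register = [
    Assumption("Voters grasp the economic trade-offs of the referendum", 9.0, 1.0),
    Assumption("Productivity gains from automation will be broadly shared", 8.0, 4.0),
    Assumption("Offshored supply chains could be rebuilt domestically", 7.0, 2.0),
]

# Plan only to the next checkpoint: run the highest-priority test,
# update the register with what was learned, then re-rank and repeat.
for a in sorted(register, key=lambda a: a.priority, reverse=True):
    print(f"Test next (priority {a.priority:.1f}): {a.statement}")
```

Nothing about the ranking formula is sacred; the design choice that matters is that assumptions are explicit, priced, and scheduled for testing rather than silently treated as facts.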