This Is Why You Can’t Always Trust Data

Innovation. Ideation. Out-of-the-box problem-solving. Creative decision-making. So much of what we value in the brave new economy depends on people having fertile, expansive, and dynamically open minds. So it may come as a surprise to many that the default state for our minds seems to be, well, “closed.” Or at least inhospitable to ideas that differ from the ones already set in our consciousness.

Remarkably, that’s true even in a realm that has, by reputation, long valued exploring the unknown and challenging accepted wisdom: academic research. For a telling example, consider the decades-old debate over whether too much salt in the diet is bad for your health.

The answer to that question, to hear some definitive sources tell it, is straightforward: Yes, it is. Witness, for instance, how Thomas Frieden, the director of the Centers for Disease Control and Prevention (CDC), titled a 2010 paper in the prestigious Annals of Internal Medicine: “We can reduce dietary sodium, save money, and save lives.” Indeed, Frieden called reducing sodium intake in our diet “the most cost effective intervention to control chronic disease” after tobacco control—a conclusion that underpins the CDC’s current nationwide strategy to reduce the public’s consumption of salt.

Such a strong and clear statement from the leading public health official in this country might make you think there is scientific consensus around the harms of elevated salt in populations and about the potential benefit of reducing levels of salt in our food. That’s far from the case.

Consider what researchers Niels Graudal and Gesche Jürgens wrote the following year in the British Medical Journal, another of the world’s most respected medical venues: “It is surprising that many countries have uncritically adopted sodium reduction, which probably is the largest delusion in the history of preventive medicine.”

The Institute of Medicine (renamed the National Academy of Medicine in 2015) is a distinguished nonpartisan body that is frequently called upon to weigh in on the key scientific and health controversies of our time. In a 2010 report, it made clear that “population-wide reductions in sodium could prevent more than 100,000 deaths annually.” But three years later, the same body “determined that evidence from studies on direct health outcomes is inconsistent and insufficient to conclude that lowering sodium intakes below 2,300 mg per day either increases or decreases risk of CVD [cardiovascular disease] outcomes (including stroke and CVD mortality) or all-cause mortality in the general U.S. population.” That’s a mouthful of medicalese, to be sure, but the report’s bottom line is this: nobody has proved salt’s health impact one way or another.

The existence of controversies in science is neither new nor, in and of itself, alarming. Science is meant to advance through disagreement and the back and forth of ideas. What is noteworthy about the salt controversy, however, is the disconnect between clear divides in the science and the certitude of public health officials.

What could explain this discrepancy?

In a series of papers published over the past few years, my colleagues and I have tried to understand this phenomenon. In papers in the journal Health Affairs (2012), in Science (2013), and forthcoming in the International Journal of Epidemiology, we analyze these two poles in salt research, which, as it turns out, rest on two virtually separate “scientific literatures,” published by different groups of scientists, each camp working its own side of the hypothesis and each citing its own members as it builds a body of work supporting its argument.

As the field has grown over the past several decades, the polarities have deepened, and the two camps have all but stopped talking to each other. Papers that aim to review the full field instead summarize one side or another—giving us not one but two bodies of knowledge, each firmly aiming to prove only its own thinking. It has thus fallen to public health officials to essentially take sides, choosing between two opposing points of view that the scientists themselves have scarcely tried to reconcile.

This is an unfortunate state of affairs for a matter as important as national health policy, an outcome that presumably would affect hundreds of millions of people. But it also offers some insights that may be valuable for all of us—lessons about how we draw conclusions from data, and how leaders should act on data.

First, the process of generating ideas gives the illusion of novelty of thought. But very few ideas are new. Most ideas, including what I am writing here, replicate previously articulated concepts, perhaps with different shading and nuance. This becomes a problem when a body of writing deepens our adherence to a particular idea, becoming self-referential and self-reinforcing. We do indeed think what we read, and what we read is often written by someone who has likewise read a particular set of prior writings and refashioned them as his own. This calls for the reader to become a critical consumer of writing: to ask where the writing comes from and, more important, to determine whether there is other writing outside the reader’s field of vision that reaches other, perhaps dramatically different, conclusions.

Second, decision-makers must frequently choose a course of action in the face of imperfect data. It is why we pay leaders: to make tough calls when circumstances are murky and the path is anything but clear. But when is it wise to make a critical decision on imperfect information, and when is it the better part of valor to recognize that we simply do not know enough to act? It is up to the leader to analyze the data to the best of his ability and figure that out. But to do so smartly, the leader needs to understand the genesis of those data. Would knowing, for instance, that you are reading one half of an evenly split body of opinion influence your assessment of what you’re reading? It probably should.

Third, data analysis, be it in universities or in industry, is conducted by people with a set of incentives. Too often those incentives encourage analysts to produce findings that fit some overarching narrative favored by the institution or company in which they work. The analyst has very little incentive to pose the tough questions, to look at the other body of analysis and ask, “Why are we finding something different? Can we try to resolve this discrepancy?” The result is manufactured consensus within a group of like-minded thinkers, even as another, equally rational, group of thinkers on the other side of town arrives at a different conclusion.

The challenge for all of us is to stay informed, but not fully formed in our views. As the great debate over salt suggests, keeping an open mind may be the healthiest thing any decision-maker can do.

Sandro Galea is dean of Boston University’s School of Public Health.

A version of this article appears in the December 15, 2015 issue of Fortune with the headline “The Big Think.”