Their concerns ought not be ignored.
The supernaturally intelligent Elon Musk fears where artificial intelligence might lead. In 2014, in remarks at the Massachusetts Institute of Technology, the CEO of Tesla and SpaceX suggested that AI might be humanity’s “biggest existential threat.”
“With artificial intelligence, we are summoning the demon,” he said. “You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.”
Last year, he expanded on those remarks, telling Kara Swisher and Walt Mossberg on the Recode stage that “not all AI futures are benign.”
“If we recreate some digital superintelligence that exceeds us in every way…by a lot…it’s very important that it be benign,” he said. The frightening alternative is that we’d one day find ourselves living under despotic control, with the dictator in this case being an AI-infused robotic overlord (“or the people controlling the computer,” Musk said)—though he noted that his full position on the matter “would require quite a long explanation.” (To read more about his concerns, see OpenAI, the nonprofit he co-founded, which hopes to discover and enact “the path to safe artificial general intelligence.”)
Musk is hardly the only big-brained human to warn about what big human brains can unleash. Generations before Musk, Albert Einstein feared the power of a different technology—one he helped conceive: the atomic bomb.
In Einstein: The Life and Times, biographer Ronald Clark reveals a letter that Einstein wrote to fellow physicist Niels Bohr on Dec. 12, 1944—eight months before the first atomic bomb used in war was dropped on Hiroshima:
“When the war is over, then there will be in all countries a pursuit of secret war preparations with technological means which will lead inevitably to preventative wars and to destruction even more terrible than the present destruction of life,” Einstein wrote. “The politicians do not appreciate the possibilities and consequently do not know the extent of the menace. Every effort must be made to avert such a development.”
Einstein, of course, had previously written to President Franklin Roosevelt, urging him to build the bomb in the first place, fearing that the Nazis would get there before the Allies. It was not long before he concluded this war-winning device might also one day destroy life on earth.
I bring this up because it is not insane or misguided to be wary of the marvels that are entering our world at breakneck speed. Many fear the power of new technology to harm our health, uproot our jobs, or imperil civilization itself. These are smart things to think about, and even debate.
The concerns, well founded or not, are ever-present in the fields of health and medical science. With each new advance, from X-rays to vaccines, there has been both progress and panic, the two often intertwined in an awkward and lasting dance.
In 1955, Scientific American asked, somewhat sneeringly, why so many people were “violently against” the process of fluoridating municipal water—a process that, according to the mandarins of science, was clearly shown to prevent tooth decay and do no harm. (Some people worried that water fluoridation would lead to widespread poisoning; others feared it for a different reason: that it was the vanguard of “socialized medicine” and an assault on individual medical privacy.)
That debate, in case you’re wondering, continues still, with some contending that the practice has led to cancer and other bad health outcomes.
Today, there is even more widespread concern over biological technologies like the gene-editing tool CRISPR, which can alter the genetic material of anything from bacteria and viruses to plants, animals, and humans—and “gene drives,” which can spread such alterations through entire populations. Because gene drives pass these changes down from one generation to the next, each potential alteration in the code of life could have a lasting and unknowable effect.
Altering the genes of mosquitoes, for example, could lead to a world without malaria and the Zika virus—or it could open Pandora’s Box anew.
Such technology, in itself, is neither good nor bad. Its use can, at the same time, produce both, sometimes in ample quantities. And this is important: For most of history, we humans have known that, and opted to take the good with the bad. We’ve done that even when the latter seemed very bad indeed.
Back in 1931, Dr. James A. Tobey opined in the Scientific Monthly about “The Hazard of the Automobile”: “In spite of rigid traffic regulations, constant educational campaigns to promote highway safety, and many other efforts, the mortality from automobile accidents is increasing at an alarming rate in the United States,” he wrote. “Automobile accidents now result in about five times as many deaths in this country as does typhoid fever, once a widely prevalent scourge.”
Back then, cars killed more people than diabetes did; they killed more than homicides and suicides combined. (Not so anymore.) But we put up with these newfangled and deadly creations because they’ve, in turn, given us suburban backyards, drive-thru eateries, carpool karaoke, and other luxuries of mobility.
Technology can be frightening—and maybe even should be frightening at times. But we’ve never been very good at stopping its progression, and I would argue that’s a lucky thing.
Genevieve Bell, an Australian anthropologist and historian of the culture of technology, told the Wall Street Journal years ago that early critics of train travel warned “that women’s bodies were not designed to go at 50 miles an hour.” They feared women’s “uteruses would fly out of our bodies as they were accelerated to that speed,” said Bell.
Perhaps that’s something we should fret about with Mr. Musk’s coming Hyperloop, too.
Wait—it seems some folks already are.
This essay appears in today’s edition of the Fortune Brainstorm Health Daily.