Fear of robots, computers, and automation may be at its highest point since the B movies of the 1950s. Not only is there concern about jobs, with even white-collar occupations looking vulnerable, but big names in technology have weighed in with their worries.
Philanthropist and Microsoft co-founder Bill Gates said, “[I] don’t understand why some people aren’t concerned” about artificial super intelligence that could exceed human control. Physicist Stephen Hawking thinks that “development of full artificial intelligence could spell the end of the human race,” as machines could redesign themselves at a rate that would leave biological evolution in the dust. Tesla Motors CEO and technology investor Elon Musk said research in the area could be like “summoning the demon” that is beyond control. He donated $10 million to the Future of Life Institute, which sponsors research into how humanity can navigate the waters of change in the face of technology.
That’s one camp.
Then there's another camp that says doomsday concerns are overblown and that, to borrow from FDR, the only thing to fear is fear itself. These people, a mix of technologists, economists, and others, say that the combination of artificial intelligence, automation, and robotics will usher in new, better solutions to world problems.
They argue that the fear of technology is an old one, and that past experience has shown that while new developments can kill off jobs, they create even more to replace them. Machines could, in theory, replace humans in a wide variety of occupations, but their shortcomings in creativity, adaptability, and even common sense are so vast that they will be unable to do so in the foreseeable future.
Instead, these people suggest, robots and computers will work side by side with humans, enhancing productivity and freeing people from much of the drudgery of daily life. In short, the coming years will look like all the ones that came before, and society will sort itself out. In fact, a new film, "Chappie," due out March 6, depicts an anti-Terminator view: a world in which robots hold the solutions and humans are the bad guys. "You would have something that has 1,000 times the intelligence that we have, looking at the same problems that we look at," the director Neill Blomkamp told NBC News. "I think the level of benefit would be immeasurable."
These swings in show business reflect a deeper disagreement over whether technology holds promise or peril. The question comes down to whether the past necessarily predicts the future or whether humankind could be in for a nasty shock. With luck, the optimists will be the ones able to say, "We told you so." Here are five voices that say worries are overblown and leaps in technology will bring the human race along with them.
David Autor
Professor of Economics and Associate Department Head, Department of Economics, Massachusetts Institute of Technology
"In 1966, the philosopher Michael Polanyi observed, 'We can know more than we can tell... The skill of a driver cannot be replaced by a thorough schooling in the theory of the motorcar; the knowledge I have of my own body differs altogether from the knowledge of its physiology.' Polanyi’s observation largely predates the computer era, but the paradox he identified — that our tacit knowledge of how the world works often exceeds our explicit understanding — foretells much of the history of computerization over the past five decades. ... [J]ournalists and expert commentators overstate the extent of machine substitution for human labor and ignore the strong complementarities. The challenges to substituting machines for workers in tasks requiring adaptability, common sense, and creativity remain immense."
Jeff Hawkins
Executive director and chairman of cognitive theory research organization Redwood Neuroscience Institute, co-founder of Palm Computing, and co-founder of machine intelligence company Numenta
"The machine-intelligence technology we are creating today, based on neocortical principles, will not lead to self-replicating robots with uncontrollable intentions. There won’t be an intelligence explosion. There is no existential threat. This is the reality for the coming decades, and we can easily change direction should new existential threats appear."
Eric Horvitz
Distinguished Scientist & Managing Director, Microsoft Research
"There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don't think that's going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life."
Deborah Johnson
Anne Shirley Carter Olsson Professor of Applied Ethics in the Science, Technology, and Society Program in the School of Engineering and Applied Sciences at the University of Virginia
"Presumably in fully autonomous machines all the tasks are delegated to machines. This, then, poses the responsibility challenge. Imagine a drone circulating in the sky, identifying a combat area, determining which of the humans in the area are enemy combatants and which are noncombatants, and then deciding to fire on enemy targets.
"Although drones of this kind are possible, the description is somewhat misleading. In order for systems of this kind to operate, humans must be involved. Humans make the decisions to delegate to machines; the humans who design the system make decisions about how the machine tasks are performed or, at least, they set the parameters in which the machine decisions will be made; and humans decide whether the machines are reliable enough to be delegated tasks in real-world situations."
Michael Littman
Professor of Computer Science, Brown University
"To be clear, there are indeed concerns about the near-term future of AI — algorithmic traders crashing the economy, or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population. There's also a concern that systemic biases within academia and industry prevent underrepresented minorities from participating and helping to steer the growth of information technology. These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic."