Top AI Researchers—Including Elon Musk—Pledge to Never, Ever Help Anyone Make Killer Robots
Some of the leading lights in the artificial intelligence world have pledged to never help design lethal autonomous weapons—"killer robots" that decide for themselves who should be targeted for death.
“We the undersigned agree that the decision to take a human life should never be delegated to a machine,” states the pledge, signed by the likes of Tesla CEO Elon Musk, the co-founders of the Google-owned DeepMind, and the Larry Page-backed XPrize Foundation.
Last year, more than 100 roboticists and AI researchers, including Musk and DeepMind's Mustafa Suleyman, urged the United Nations to maintain strict oversight of killer robots.
The new pledge, signed by over 2,400 individuals and published at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, again calls “upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons.”
“These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons,” it reads. “We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.”
The pledge was organized by the Future of Life Institute, the board of which includes Musk alongside other well-known names in the field such as Nick Bostrom and Erik Brynjolfsson—and the actors Morgan Freeman and Alan Alda.
So far, 26 countries have endorsed the proposed ban on killer robots, including China. The others are Algeria, Argentina, Austria, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Djibouti, Ecuador, Egypt, Ghana, Guatemala, the Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, the State of Palestine, Uganda, Venezuela, and Zimbabwe.
Earlier this year, dozens of AI and robotics experts boycotted South Korea's KAIST research university over its reported work with a defense firm on software for "AI-based" missiles. The university strenuously denied that it was working on killer robots, promising not to "conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control."