
U.N. Moves Towards Possible Ban on Autonomous Weapons

December 24, 2016, 9:24 PM UTC

The 89 nations party to the United Nations Convention on Certain Conventional Weapons (CCW) have voted to convene groups of governmental experts twice in 2017 to discuss the implications of autonomous weapons, which would choose targets without human oversight. According to Human Rights Watch, the agreement could move the international body towards a broad ban on weapons directed by artificial intelligence.

Autonomous weapons systems—or, more colloquially, killer robots—have been the subject of intense criticism from tech leaders and global thinkers including Elon Musk, Stephen Hawking, and Steve Wozniak. In part, they argue that warfare conducted by robots would risk civilian casualties with relatively little oversight.


According to the Campaign to Stop Killer Robots, culpability for those deaths would also be unclear. Much as in the case of driverless cars, the death of a civilian at the hands of a weaponized robot could be variously blamed on its programmer, manufacturer, or operator.

According to Human Rights Watch’s Steve Goose, the development of killer robots would open a Pandora’s Box with irreversible consequences. “Once these weapons exist,” he said in a statement, “there will be no stopping them. The time to act on a pre-emptive ban is now.”

But, again paralleling debates about self-driving cars, others argue that robots could actually decrease civilian casualties. Among other advantages, robots would not act out of fear for their own safety, and could make swift decisions with more clarity than human soldiers.


Some autonomous weapons systems are already in various stages of development, though it’s important to distinguish them from drones remotely controlled by human operators. One publicly known autonomous weapon program is Israel Aerospace Industries’ Harop drone, designed to autonomously target enemy air-defense systems. Sentry guns with autonomous capabilities are already in use in South Korea and Israel, though those weapons are not currently used in autonomous mode.

Similarly, several platforms built by Google’s Boston Dynamics with U.S. defense funding could be weaponized, though they’re publicly touted as rescue robots.