By David Z. Morris
August 7, 2016

On Thursday, a program called Mayhem, created by a Carnegie Mellon team, won the $2 million first prize in the Cyber Grand Challenge, the latest in a series of technology competitions sponsored by the Defense Advanced Research Projects Agency, or DARPA.

Previous DARPA challenges have helped advance the state of the art in self-driving cars and robotics, but this year’s challenge encouraged teams to develop automated hacking software. Mayhem came out on top in an all-automated game of “capture the flag,” in which systems tried to hack each other, while defending themselves from attack—all with no human intervention.


According to the Electronic Frontier Foundation, while this is “very cool [and] very innovative,” it “could have been a little dangerous.” While part of the program’s goal is to create automatic systems that detect system vulnerabilities so they can be patched, EFF’s Nate Cardozo, Peter Eckersley, and Jeremy Gillula say that the same technology in the wrong hands could create an epidemic of industrial-scale hacks.

“We are going to start seeing tools that don’t just identify vulnerabilities,” they write, “but automatically write and launch exploits for them.”

EFF’s main concern about automated hacking protocols is that the playing field isn’t even: while it’s relatively straightforward for an iterative program to find vulnerabilities, automatically patching those vulnerabilities is far more complex. Some devices, particularly in the Internet of Things, can’t be remotely patched at all; others lack any upgrade mechanism even when they do have a network connection.
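The “iterative program” technique the EFF alludes to is essentially fuzzing: repeatedly mutating an input and watching for crashes. As a rough illustration only (the target parser, its planted bug, and all names here are hypothetical, not code from Mayhem or any real system), a minimal mutation-based fuzzer can look like this:

```python
import random

def parse_header(data: bytes) -> int:
    # Toy parser with a planted bug, standing in for real target code.
    if len(data) >= 4 and data[:2] == b"OK":
        length = data[2]
        return data[3 + length]  # bug: index is never bounds-checked
    return -1

def fuzz(target, seed_input: bytes, trials: int = 1000, seed: int = 0):
    """Mutate one byte of a known-good input per trial and record crashes."""
    rng = random.Random(seed)  # fixed seed for a reproducible run
    crashes = []
    for _ in range(trials):
        data = bytearray(seed_input)
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)  # flip one byte at random
        try:
            target(bytes(data))
        except Exception as exc:  # a crash marks a candidate vulnerability
            crashes.append((bytes(data), type(exc).__name__))
    return crashes

# Start from an input the parser accepts; mutations of the length byte
# quickly drive the index out of bounds.
crashes = fuzz(parse_header, b"OK\x01ABC")
print(f"found {len(crashes)} crashing inputs")
```

Finding the crash is the easy half, which is the EFF’s point: turning each crashing input into a safe patch, and shipping that patch to every affected device, is where the asymmetry lies.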

And while some of those devices—often older and less useful for bad actors—wouldn’t be worth manually hacking, the advent of automatic programs could unleash a wave of ‘long tail’ hacks causing significant and unpredictable damage.

The EFF is encouraging researchers in the field to seriously evaluate worst-case scenarios for the technology they develop—to ask how easily it could be turned into a cyber-weapon, and just how much damage it might do. They compare this need for self-assessment to the approach of biologists, who have periodically suspended work on organisms deemed dangerous to public health and who operate under strict laboratory controls.


There is one comforting element in all of this. At least for now, according to Network World, programs like Mayhem are not artificial intelligences—they do not learn as they operate, so there’s no chance that they’ll fulfill Elon Musk’s worst nightmares.
