Amazon CEO Jeff Bezos, Alphabet CEO Larry Page, Facebook COO Sheryl Sandberg, Vice President-elect Mike Pence, and President-elect Donald Trump at Trump Tower, December 14, 2016.
TIMOTHY A. CLARY/AFP/Getty Images

They need to balance user privacy and national security needs.

By James Andrew Lewis
March 13, 2017

The WikiLeaks revelation this week that the Central Intelligence Agency (CIA) has the ability to spy on people by hacking their Internet-connected devices should not have been a surprise. Nor, frankly, should it be a surprise that the commercial technology we all use is inherently hackable.

Technology’s omnipresent vulnerability was one of the great revelations of Edward Snowden’s National Security Agency (NSA) disclosures. In 2013, Snowden, a former NSA contractor, copied documents revealing that the agency was running a then-undisclosed global surveillance program.

When the Snowden leaks occurred, big technology companies (which are largely American) rushed to show the global market they were not puppets of the U.S. government. Facebook, Microsoft, Google, and Apple all fought to restore trust, adding encryption to their products and refusing to cooperate in investigations. Facebook, for example, strengthened encryption on its messaging app, WhatsApp. In the most well-known case, Apple refused to help the FBI gain access to an encrypted iPhone used by one of the shooters in the 2015 terrorist attack in San Bernardino, Calif. The FBI eventually found a workaround to access the iPhone without Apple’s help.

In standing up to the government, these companies made the case to their customers that American products can be trusted and that American companies would protect their data. This made perfect sense from a commercial perspective, but it’s naive for companies to refuse to cooperate and expect U.S. agencies to just give up. The CIA tools disclosed by WikiLeaks appear designed to work around the defenses tech companies erected after the Snowden revelations. These agencies are well-resourced, determined entities with immense technical skills. When confronted with encryption on a phone or programs designed to make it difficult for governments to access information, intelligence agencies designed tools to get around the new obstacles. And when these tools are compromised, new ones will be built.

The problem now for Silicon Valley is how to reassure its customers after these new disclosures. If the documents are authentic, most tech consumers’ devices are open to being hacked by either the government or a malicious actor. The battle between the tech community and the federal government, which came into sharp relief after San Bernardino, may be about to restart. This would serve no one’s interest.

Some in the tech world would like government agencies to immediately reveal any bug or vulnerability they find, at least to the company whose product is affected. We first heard these calls after the Snowden revelations. In response, the Obama administration created the Vulnerabilities Equities Process, an interagency review to decide when the U.S. should reveal a vulnerability it had found and when it should keep it secret for use by intelligence or law enforcement agencies. Companies now also call for a “Cyber Geneva Convention” under which all governments would pledge to immediately reveal any vulnerability they find.

This is a worthy goal, but it faces two serious problems. First, intelligence agencies have little incentive to give this information up. They can justifiably claim that if they find a vulnerability and are not currently using it, they’ll never know when it might come in handy. Second, and perhaps more importantly, the vulnerabilities we know of (even those the CIA knows) are only a fraction of the universe of total vulnerabilities in information technology.

Hackers, whether government or criminal, are quick to take advantage of these vulnerabilities. What’s worse, the universe of exploitable vulnerabilities is growing as we transition to Internet of Things devices, ranging from toasters to cars, that for reasons of cost and design are often not very secure. The problem is not that government isn’t telling Silicon Valley what it finds; the problem is that Silicon Valley, along with some car, television, and appliance companies, writes buggy software.

A replay of the San Bernardino debate won’t help anyone. The tech world may have to accept that vulnerability disclosure is not a panacea. Intelligence agencies could do more harm than good if they promise to never exploit a found vulnerability and tell a company immediately when they find one. At the same time, government finger-pointing at Silicon Valley’s imperfect software or new love of encryption is similarly unhelpful. Societies gain more from using buggy technology than they lose. This is why consumers continue to accept the tradeoff of less privacy for more services.

Washington and Silicon Valley would do better, for both national security and business purposes, to avoid mutual blame and look for ways to rebuild the discreet partnership they once had in order to share information and fix problems before hackers can exploit them. The relationship was never perfect, but it was better than the status quo.

The goal should not be to fight over disclosing vulnerabilities or blaming people for finding them, but to reduce their number. This will take time, but if America’s East and West Coasts recognize their mutual interests, they might be able to make progress on information security.

James Andrew Lewis is a senior vice president at the Center for Strategic and International Studies.
