Google Makes a Vague Pledge to Limit Work on Artificial Intelligence in Weapons, Surveillance

June 7, 2018, 11:30 PM UTC

Following months of controversy over a joint artificial intelligence project with the Pentagon, Google said on Thursday that it would refuse to pursue any initiatives that are “likely to cause overall harm,” including many kinds of weapons and surveillance.

The new principles follow months of debate inside Google over AI technology the company developed to help the U.S. military analyze drone footage as part of what was known as Project Maven.

Thousands of Google employees signed a petition in April calling on CEO Sundar Pichai to cancel the partnership. The following month, dozens of workers resigned from the company in protest.

Under pressure, Google decided against renewing the contract, and Pichai vowed to clarify Google’s policies.

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai wrote in introducing seven principles “to guide” the company’s future work.

The principles include aims such as safety, accountability, privacy, avoiding unfair bias, and being “socially beneficial.” In addition, Pichai outlined four areas where Google will not develop or deploy AI.

1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

3. Technologies that gather or use information for surveillance violating internationally accepted norms.

4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Pichai said Google may work with the military in other areas, including cybersecurity, training, and veterans’ healthcare. Beyond that, the memo’s wording is vague enough to raise questions about how and when it will apply.

The pledge covers only weapons whose “principal purpose” is to cause injury, and it’s unclear which weapons fall outside that definition. Similarly, the “internationally accepted norms” on surveillance go unspecified, at a moment when the U.S. itself is rewriting many of those norms.

CNBC also noted that Pichai’s vow to “work to limit potentially harmful or abusive applications” is less explicit than previous Google guidelines on AI. Google reportedly said the wording changed because the company can’t control all uses of its AI technology.