Following months of controversy over a joint artificial intelligence project with the Pentagon, Google said on Thursday that it would refuse to pursue any initiatives that are “likely to cause overall harm,” including many kinds of weapons and surveillance.
The new principles follow months of debate inside Google over AI technology it had developed for the U.S. military for analyzing drone footage as part of what was known as Project Maven.
Under pressure, Google decided against renewing the contract, and chief executive Sundar Pichai vowed to clarify Google’s policies.
“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai wrote in introducing seven principles “to guide” the company’s future work.
The principles include aims such as safety, accountability, privacy, avoiding unfair bias, and being “socially beneficial.” In addition, Pichai outlined four areas where Google will not develop or deploy AI.
Pichai said Google may work with the military in other areas, including cybersecurity, training, and veterans’ healthcare. Beyond that, the memo’s wording is vague enough to raise questions about how and when it will apply.
Only weapons that have a “principal purpose” of causing injury will be avoided, but it’s unclear which weapons that refers to. Similarly, the principles don’t specify which internationally accepted norms apply, at a time when the U.S. is rewriting many of those norms.
CNBC also noted that Pichai’s vow to “work to limit potentially harmful or abusive applications” is less explicit than previous Google guidelines on AI. Google reportedly said the wording changed because the company can’t control all uses of its AI technology.