Controversial Google Military AI Contract Fuels AI Regulation Debate

This illustration picture, taken on April 29, 2018, shows the Google logo displayed on a screen and reflected on a tablet in Paris. Photo by Lionel Bonaventure—AFP/Getty Images

This article first appeared in Data Sheet, Fortune’s daily newsletter on the top tech news. To get it delivered daily to your inbox, sign up here.

The current controversy over data privacy may look like a tempest in a teapot compared to the possible misuses of artificial intelligence. Or as Fei-Fei Li, chief scientist for AI at Google Cloud, put it in a recent email: “I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.”

Good morning at midweek. Aaron in for Adam, thinking about whether it should be Arnold Schwarzenegger’s Terminator or Majel Barrett-Roddenberry’s Star Trek computer voice that comes to mind when we consider the future of AI.

A trio of New York Times reporters has dug into the internal debate at Google over developing AI applications for the military after the company won a Pentagon contract to do just that. The reporters got their hands on the Li email, as well as many others, to reveal just how controversial the Pentagon work is inside the company. Some of this has been reported previously, like Gizmodo’s report about Google employees resigning over the contract. But the Times report includes considerable new detail and nuance.

The episode also suggests that we probably can’t rely on tech companies to police themselves when it comes to dangerous AI development. Not only did Google take the military work, citing the fact that competitors Amazon (AMZN) and Microsoft (MSFT) were already in the running, but its all-too-frequent arrogance was on display as well. Google co-founder Sergey Brin told employees last week, according to the Times, that it would be better for the world if military groups engaged with Google rather than only traditional defense contractors. That’s a rationalization that could justify almost any unethical or risky decision. Google (GOOGL) says it is developing guidelines that will include a ban on AI work in weaponry.

Still, the debate over whether or how to regulate artificial intelligence is just getting started. AI expert Amitai Etzioni and his son and fellow expert Oren Etzioni penned a lengthy essay last year arguing against regulation. On the other side, Tesla (TSLA) CEO Elon Musk, though out of media favor right now, has been the most vocal proponent of strong and immediate regulation.

But if there’s one takeaway from all of the recent reporting, it’s that decisions must be made soon, because the industry is racing ahead.
