
Deciding Whether To Fear or Celebrate Google’s Mind-Blowing AI Demo

May 10, 2018, 1:02 PM UTC

This article first appeared in Data Sheet, Fortune’s daily newsletter on the top tech news. To get it delivered daily to your inbox, sign up here.

Are there imaginable digital computers which would do well in the imitation game?

Good morning. Aaron in for Adam, contemplating the famous question that computer scientist and mathematician Alan Turing posed as a test of whether a machine could think.

The wow factor was quite high, maybe off the charts, this week when Google (GOOGL) debuted recordings of its Duplex AI app making phone calls and conversing with regular people at restaurants and a nail salon. Duplex sounded amazingly human, smoothly navigating the minor inconveniences of booking appointments and even uttering the occasional “um” and “mmhm” to make sure the person on the other end knew it was still there. It sure seemed like Duplex had aced the “imitation game.”

But somewhere between the “um” and the “mmhm,” the creepiness factor started to rise and people began to imagine how this creation could be used for ill. Would robocallers, scam artists, and hackers start employing Duplex the better to dupe unwitting consumers? Would interactions with workers in the service industry be further dehumanized? Was the service just the latest “invasive” and “infantilizing” development from the clueless coders of Silicon Valley?

Most of the concerns revolved around how Duplex works and how it will be used. Most could also apply to virtually any AI app intended to interact with humans. Perhaps a deeper question, then, is how society should regulate the coming wave of artificial intelligence, if at all. Will we rely on self-regulation by industry, as we have in so many other areas? Would a further evolution of Isaac Asimov’s Three Laws of Robotics suffice? Or should laws be passed setting out acceptable and unacceptable AI practices?

Whatever choices are made, they should be made intentionally and with serious consideration. We are almost 20 years out from Harvard Professor Larry Lessig’s groundbreaking essay “Code Is Law,” and it still feels like one of the most important texts guiding us into the future.

Our choice is not between “regulation” and “no regulation.” The code regulates. It implements values, or not. It enables freedoms, or disables them. It protects privacy, or promotes monitoring. People choose how the code does these things. People write the code. Thus the choice is not whether people will decide how cyberspace regulates. People—coders—will. The only choice is whether we collectively will have a role in their choice—and thus in determining how these values regulate—or whether collectively we will allow the coders to select our values for us.

Let us know what you think of the debate.