When Google (GOOGL) unveiled its Duplex “AI” earlier this week, it sparked massive ethical concerns. Belatedly, the company seems to have recognized the problem.
The point of the new virtual assistant is to conduct phone calls on behalf of the user, making appointments and so on. Duplex is the culmination of much of Google’s work on machine-learning and natural-language technology; the result is that it sounds and comes across like a real person.
When Google CEO Sundar Pichai demonstrated the service at the company’s I/O developer conference, playing recordings of Duplex interacting with actual people at a hair salon and a restaurant, the demo rightly wowed a lot of people. But it also outraged many, because those on the other end of the line apparently had no idea they were talking to a robot: good tech; bad ethics.
As prominent sociologist Zeynep Tufekci put it: “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.”
Yesterday, Google finally responded by saying it was “designing this feature with disclosure built-in, and [will] make sure the system is appropriately identified. What we showed at I/O was an early technology demo.”
It’s good that Google is listening, but deeply concerning that it didn’t think to demonstrate these human-friendly tweaks along with its technical advances. As I’ve written before, in the context of Facebook (FB) and the Cambridge Analytica scandal, big tech can’t take people’s trust for granted.
Many people are already freaked out by the implications of AI—from its impact on employment to its lack of human discretion—and suspicious of Google because of the amount of their data it holds. So what does the company do? It runs a demonstration implying that, thanks to Google tech, people won’t even be able to trust that they’re talking to a real person on the phone.
Human ethics—not just human simulation—need to be baked into these systems from the start, not as a reactive afterthought. And Google, as a company that so many budding technologists look up to, really should try harder on this front.
David Meyer is a Berlin-based writer for Fortune and the author of the book “Control Shift: How Technology Affects You and Your Rights.”