Amazon’s plan for Alexa to mimic anyone’s voice raises fears it will be used for deepfakes and scams

June 23, 2022, 10:55 AM UTC

Amazon Alexa’s newest feature: bringing people back from the dead.

Amazon is developing a new capability for its voice assistant Alexa that will let it mimic any person’s voice, living or dead, using less than a minute of recorded audio.

At the company’s re:MARS conference in Las Vegas on Wednesday, Rohit Prasad, Amazon’s senior vice president and head scientist for Alexa, demonstrated the feature with a video of a child asking an Amazon device, “Alexa, can Grandma finish reading me The Wizard of Oz?”

Alexa confirms the request with its default, robotic voice, then immediately switches to the humanlike, soft, and kind tone of the child’s grandmother.  

Prasad noted during the demonstration that the feature could be used to help memorialize a deceased family member. “So many of us have lost someone we love” during the COVID-19 pandemic, he said, a reality that has pushed Amazon to make companion-like artificial conversation a key focus.

“While A.I. can’t eliminate that pain of loss, it can definitely make the memories last,” Prasad said.

But despite the presentation’s uplifting emotional framing, the new Alexa capability drew quick pushback from some in the technology world, who saw voice mimicry less as a means of emotional connection than as an ideal tool for deepfakes, criminal scams, and other nefarious ends.

The technology

An Amazon spokesperson told Fortune that Prasad’s presentation was based on Amazon’s exploratory text-to-speech (TTS) research, which builds on recent advancements in the field. “We’ve learned to produce a high-quality voice with far less data versus recording in a professional studio,” the spokesperson said.

The voice mimicry feature is still in development, and the company has not said when it intends to roll it out to the public.

The new voice technology will need only “less than a minute of recorded audio” to produce a high-quality voice, Prasad said, which is possible “by framing the problem as a voice conversion task and not a speech generation path.”

The new technology might one day become ubiquitous in shoppers’ lives, and Prasad noted it could be used to build trust between users and their Amazon devices.

“One thing that surprised me the most about Alexa is the companionship relationship we have with it. In this companionship role, human attributes of empathy and affect are key for building trust,” he said.  

Fears

While the new mimicry feature may be innovative, it raises fears, including among companies working in the field, that it could be put to malicious use.

Microsoft, which also created voice mimicry technology to help people with impaired speech, restricted which segments of its business could use the technology over fears it would be used to enable political deepfakes, Microsoft’s chief responsible A.I. officer, Natasha Crampton, told Reuters.

The new feature is also stoking worries online.

“Remember when we told you deepfakes would increase the mistrust, alienation, & epistemic crisis already underway in this culture? Yeah that. That times a LOT,” said Twitter user @wolven, whose bio identifies him as Damien P. Williams, a Ph.D. researcher studying algorithms, values, and bias.

Some fear how easily scammers could turn the technology to their own ends.

Mike Butcher, editor of TechCrunch’s ClimateTech, noted, “Alexa mimicking a dead family member sounds like a recipe for deep psychological damage.”

Others advised people to stop buying the device altogether.
