How A.I. is playing a bigger role in music streaming than you ever imagined
Behind the scenes of some of the most popular music-streaming services, artificial intelligence is hard at work like an automated DJ, deciding which songs listeners will enjoy.
The technology’s ability to learn from the listening habits of millions of users across millions of songs has made the software key for nearly every music-streaming service today.
But its job doesn’t stop there. A.I. is playing an increasing role in some of the more subtle challenges inherent in music streaming, like adjusting sound volumes and eliminating dead air.
For example, Sonos, best known for its wireless audio speakers, in April debuted Sonos Radio, a streaming service that features third-party radio stations as well as the company’s first foray into original music programming. Machine-learning technology provided by a partner, Super Hi-Fi, helps with an important job: creating a smooth transition between songs.
Without it, listeners may end up being annoyed by huge differences in volume between one song and the next. Songs recorded in the 1970s, for instance, are often quieter than more modern songs, partly due to the recording techniques of that era and changing tastes in music.
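Super Hi-Fi has not published how its system works, but the underlying idea of loudness normalization can be sketched simply: measure each track's average level and compute the gain needed to bring it to a common target. The -14 dBFS target and the RMS measure below are illustrative assumptions; production systems typically use more sophisticated perceptual loudness measures.

```python
import math

def rms_dbfs(samples):
    """Root-mean-square level of a track's samples, in decibels
    relative to full scale (0 dBFS = the maximum digital level)."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square) if mean_square > 0 else float("-inf")

def matching_gain(samples, target_dbfs=-14.0):
    """Linear gain factor that brings a track to the target loudness.
    The -14 dBFS default is an illustrative choice, not any
    streaming service's published setting."""
    diff_db = target_dbfs - rms_dbfs(samples)
    return 10 ** (diff_db / 20)

# A quiet 1970s-style master next to a loud modern one (toy sine waves).
quiet = [0.1 * math.sin(i / 10) for i in range(48000)]
loud = [0.8 * math.sin(i / 10) for i in range(48000)]
# The quiet track needs roughly 8x the gain of the loud one.
print(matching_gain(quiet) / matching_gain(loud))  # ~8.0
```

Applying each track's gain before playback is what keeps the jump between a quiet oldie and a loud modern single from startling the listener.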
Online radio giant iHeartMedia, which has its own streaming and playlist service, also puts Super Hi-Fi’s machine learning to work. The technology prevents brief silence between songs, which could frustrate listeners and cause them to switch to a rival.
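The article does not detail how dead air is detected, but one common building block is a trailing-silence scan: measure how much near-silence ends a track, then start the next song that much earlier. The threshold and window sizes below are illustrative assumptions, not iHeartMedia's or Super Hi-Fi's actual parameters.

```python
def trailing_silence(samples, threshold=0.01, window=4800):
    """Count the samples of near-silence at the end of a track.
    Scans backward one window at a time and stops at the first
    window whose peak exceeds the threshold. The threshold and
    window size are illustrative, not any service's settings."""
    n = len(samples)
    end = n
    while end > 0:
        start = max(0, end - window)
        if max(abs(s) for s in samples[start:end]) > threshold:
            break
        end = start
    return n - end

# A track that ends with two seconds of silence at 48 kHz.
track = [0.5] * 48000 + [0.0] * 96000
# Start the next song this many samples early to avoid dead air.
print(trailing_silence(track))  # 96000
```

A scheduler can use that count to overlap the next song with the silent tail, so the listener never hears a gap.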
“That’s the greatest sin on radio to have dead air,” said Chris Williams, chief product officer for iHeartMedia.
As Super Hi-Fi chief technology officer Brendon Cassidy explained, advances in neural networks, the complicated software that learns patterns by analyzing vast quantities of data, have made more sophisticated audio wizardry possible. The company trains the technology on sound data so that it can accurately adjust sound on the fly.
“We have tried it years ago before all this machine learning stuff was available and weren’t as successful,” Cassidy said.
In addition to using machine learning for the role of playlist DJ, Spotify’s machine learning head Tony Jebara said A.I. helps with some more nuanced tasks. That includes choosing to add surprises to personalized playlists.
Recommending the same song too often—even if a user has listened to it for weeks—could cause them to become bored, Jebara said.
“For music, it’s pretty easy to get someone to consume by giving them what they consumed yesterday—it’s kind of table stakes,” Jebara said. Using A.I. to occasionally “pepper in” surprises based on a person’s prior listening helps spice up personalized playlists and keeps listeners from leaving.
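Spotify has not disclosed how it mixes in surprises, but the idea resembles a classic exploration strategy: most slots go to familiar favorites, while a small fraction are drawn from new material. The sketch below uses an epsilon-greedy rule with a 20% surprise rate, which is an assumption for illustration, not Spotify's actual parameter.

```python
import random

def build_playlist(favorites, discoveries, length=10, surprise_rate=0.2, seed=7):
    """Fill a playlist mostly with familiar tracks, occasionally
    'peppering in' a discovery. An epsilon-greedy sketch; the 20%
    surprise rate is an illustrative assumption."""
    rng = random.Random(seed)
    playlist = []
    for _ in range(length):
        # With probability surprise_rate, pick from unfamiliar tracks.
        pool = discoveries if rng.random() < surprise_rate else favorites
        playlist.append(rng.choice(pool))
    return playlist

favorites = ["fav_song_%d" % i for i in range(50)]
discoveries = ["new_song_%d" % i for i in range(50)]
mix = build_playlist(favorites, discoveries)
print(sum(1 for t in mix if t.startswith("new")), "surprises out of", len(mix))
```

Tuning the surprise rate is the balancing act Jebara describes: too low and the playlist grows stale, too high and it stops feeling personal.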
Still, music streaming services remain reliant on human curators and music editors. After all, music is complex—akin to human language—and is difficult for A.I. to completely understand.
Jebara said Spotify’s human music editors identify “things we don’t see in the data,” such as new musical genres and trends. Although great at recognizing patterns within millions of songs, the technology stumbles when trying to analyze songs from a genre it has never been trained to recognize.
Sonos Radio general manager Ryan Taylor said Sonos Radio uses humans rather than technology to curate its music playlists because they are better than today’s A.I. at determining that a song is more similar to one by David Bowie than to one by Led Zeppelin. He refers to these nuances as “not quite tangible elements.”
“The truth is music is entirely subjective,” Taylor said.
“There’s a reason why you listen to Anderson .Paak instead of a song that sounds exactly like Anderson .Paak,” said Taylor, referring to a popular R&B singer.
People like songs for many reasons, ranging from loving the stories behind their favorite artists to identifying with music through a cultural connection. It’s these intangibles that provide context to music, and these difficult-to-describe elements can’t be represented in data that software understands—at least for now.
“At some point in the future, A.I. might be able to pick up on that stuff,” Taylor said. “Ultimately neural networks can get there for sure, but they need more input than a catalog of 80 million tracks.”