Capitol Records’ new A.I. artist could signal the future of the music business
You may have missed it amid the recent blizzard of news, but last week marked an important moment in artificial intelligence. Capitol Records, the music label behind iconic artists like Nat King Cole and Frank Sinatra, announced that it had signed a rapper by the name of FN Meka. Unlike “Ol’ Blue Eyes” and his other famous predecessors, FN Meka is not human. He is a virtual avatar. More importantly, Meka’s songs are created through A.I.
I’m probably not the best judge of these things, but I found Meka’s new single “Florida Water” to be no worse than much of the human-created popular music one hears these days. “Give me that Patek, need that AP, need that zaza,” runs one representative lyric.
How much artificial intelligence is actually behind Meka’s music is not entirely clear. The Capitol Records announcement was vague. In an interview with Music Business Worldwide last year, Meka’s creator, Anthony Martini, said that a human voice performs the vocals but the lyrics and composition are the product of A.I.
“We’ve developed a proprietary AI technology that analyzes certain popular songs of a specified genre and generates recommendations for the various elements of song construction: lyrical content, chords, melody, tempo, sounds, etc. We then combine these elements to create the song,” Martini said in the interview.
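Martini’s description boils down to a simple pipeline: analyze a genre’s popular songs, tally the most common value of each song element, and recommend that combination. Here is a toy sketch of that idea in Python. To be clear, the actual system is proprietary and almost certainly far more sophisticated; the catalog, field names, and values below are invented for illustration.

```python
from collections import Counter

# A hypothetical "catalog" of song metadata. In a real system this would be
# extracted from audio and lyric analysis of thousands of tracks.
catalog = [
    {"genre": "rap", "tempo": 140, "key": "C minor", "theme": "flexing"},
    {"genre": "rap", "tempo": 140, "key": "F minor", "theme": "flexing"},
    {"genre": "rap", "tempo": 150, "key": "C minor", "theme": "cars"},
    {"genre": "pop", "tempo": 120, "key": "C major", "theme": "love"},
]

def recommend_elements(genre):
    """Mimic the 'analyze popular songs, recommend song elements' step:
    for each element, pick the most common value within the genre."""
    songs = [s for s in catalog if s["genre"] == genre]
    recommendation = {}
    for field in ("tempo", "key", "theme"):
        counts = Counter(s[field] for s in songs)
        recommendation[field] = counts.most_common(1)[0][0]
    return recommendation

print(recommend_elements("rap"))
# → {'tempo': 140, 'key': 'C minor', 'theme': 'flexing'}
```

A human producer (or, in Meka’s case, a human vocalist) would then assemble a track from these recommended elements, which matches Martini’s claim that the A.I. generates recommendations rather than finished recordings.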
A couple of thoughts on this:
- This is yet another example of the collision course between intellectual property and artificial intelligence. The recent kerfuffle over text-to-image A.I. art and the human-created artwork that “trained” the A.I. (which we talk about later in this week’s newsletter) shows how thorny the issue is. In the fraught realm of music—in which artists, labels, and streaming platforms have been clashing over royalties for years—it’s going to be a royal mess. By signing Meka to a label, perhaps the A.I. can be trained exclusively on music from the Capitol Records catalog. But how does Capitol compensate its roster of artists whose music was used to “inspire” Meka? And if Meka is only trained on music from the Capitol catalog, will his songs be anything more than a high-tech remix?
- Despite the sensationalistic nature of an A.I. rapper, and the potential for a new breed of virtual crooners replacing humans, the future may look less like Meka and more like The Wizard of Oz, with A.I. quietly doing the work behind the curtain. As A.I. algorithms for composing music become more widely available, it will be easy for musicians to harness the technology and use it as an expedient tool to crank out more songs. As a listener, you may never know that your favorite artist’s new song was concocted by algorithms.
Today’s edition was curated and written by Jeremy Kahn.
A.I. IN THE NEWS
Tesla data could be used to improve driving safety. But who owns that data? That's the question raised by a New York Times story about the wealth of data that Tesla's advanced cruise control (called Autopilot, but not actually capable of completely autonomous driving) collects and how this data, especially the extensive information it provides about traffic accidents, could prove useful to transportation regulators, highway authorities, insurance companies, and other car manufacturers. The story talks to one lawyer who has used the data in lawsuits and who is trying to set up a business based on collecting, anonymizing, and then selling this data. The problem is, it isn't clear whether such information belongs to the customers who own the Tesla vehicles or to Tesla itself.
Elon Musk reveals more details of his Optimus robot. In an essay published in a magazine sponsored by the Cyberspace Administration of China, the billionaire entrepreneur said that he wants the humanoid robot that Tesla is building to be able to cook, clean, and mow the grass. He also said the robot, a prototype of which Musk has said could debut as soon as the end of September, could help care for the elderly, according to a story about the essay in the tech publication The Register.
Google's A.I. is good at spotting nude images of children. But some parents have been wrongly investigated for potential child abuse and had their Google accounts permanently deleted after sharing innocent images. The New York Times talked to parents who had to send photos of their children's genitals to doctors for legitimate medical purposes, but who were nonetheless banned from Google and reported to the local police. The problem is that while Google's automated systems are very good at spotting potential child pornography uploaded to its cloud-based servers, and have gotten better at recognizing some innocent images of nude children (such as a parent's photo of their own toddler frolicking naked at the beach), they still don't understand enough contextual information to know when a photo is being shared for a legitimate purpose. The company told the newspaper that it stood by its decisions in the two cases the paper chronicled, even though law enforcement had quickly cleared the two dads involved of any crime. But the company's head of child safety operations also said Google has consulted pediatricians so that its human reviewers can better recognize conditions that might appear in photographs taken for medical reasons.
Exscientia and Bayer end partnership on drug development. In a rare setback for A.I.-enabled drug discovery, pharmaceutical giant Bayer and the U.K.-based A.I. drug research firm Exscientia have ended a deal in which the two had been collaborating on finding likely targets for both oncology and cardiovascular drugs. News site Fierce Biotech said that Bayer had paid Exscientia about $1.4 million in revenue so far as part of the partnership, which was signed in 2020 and was worth up to $243 million in upfront fees and future payments if certain development milestones were achieved. The site said Exscientia is retaining the right to develop drugs for one of two targets that had been identified during the collaboration with Bayer. Exscientia's stock, which is publicly traded on the Nasdaq, lost 20% of its value after the company announced the end of the partnership.
EYE ON A.I. TALENT
Seattle energy startup Booster has hired Andrew Hamel to be its chief technology officer. Hamel, according to GeekWire, had been an executive at the company LivePerson and before that held a variety of engineering and machine learning roles at Amazon.
EYE ON A.I. RESEARCH
Can a robot dream of itself? Researchers at Columbia University recently trained a robot arm to learn an image of its entire body from scratch, without any human data. The robot learned entirely through trial and error, starting with random movements, and then seeking to plan future actions and predict its body position as it performed these tasks. It could learn to answer questions accurately about whether certain coordinates in three-dimensional space would be occupied by its body at a certain time based on the action it was performing. The researchers then used various neural network visualization techniques to figure out how the robot imagined itself during different stages of learning. They found that while the robot initially imagined itself as a loose sort of cloud, it gradually learned a highly accurate image of its own body. The results of the experiment were published in Science Robotics. Why does this matter? Because in order for a robot to perform more complex tasks, especially in crowded environments and around other people or robots, it will need to have a good sense of its own body shape and how it occupies space. This "self-awareness" is essential for the robot to safely plan future movements, especially in new environments, without having to be specifically trained for every new space it enters.
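The core learning problem described above—collect labeled experience by random "motor babbling," then predict whether a query point in space will be occupied by the body—can be sketched in miniature. The Columbia team used a deep neural network on a real 3D arm; as a loose, invented illustration, the toy below simulates a two-link planar arm and uses a simple nearest-neighbor predictor as a stand-in for the learned self-model. All parameters (link lengths, contact radius, neighbor count) are assumptions, not values from the paper.

```python
import math
import random

# Hypothetical two-link planar arm; link lengths are invented.
L1, L2 = 1.0, 0.8

def arm_points(theta1, theta2, n=20):
    """Sample points along both links for a given joint configuration."""
    elbow = (L1 * math.cos(theta1), L1 * math.sin(theta1))
    hand = (elbow[0] + L2 * math.cos(theta1 + theta2),
            elbow[1] + L2 * math.sin(theta1 + theta2))
    pts = []
    for i in range(n + 1):
        t = i / n
        pts.append((t * elbow[0], t * elbow[1]))           # shoulder -> elbow
        pts.append((elbow[0] + t * (hand[0] - elbow[0]),
                    elbow[1] + t * (hand[1] - elbow[1])))  # elbow -> hand
    return pts

def occupied(theta1, theta2, query, radius=0.1):
    """Ground truth: does the arm's body come within `radius` of the query point?"""
    return any(math.hypot(px - query[0], py - query[1]) <= radius
               for px, py in arm_points(theta1, theta2))

# "Motor babbling": collect labeled experience from random configurations.
random.seed(0)
data = []
for _ in range(2000):
    t1 = random.uniform(-math.pi, math.pi)
    t2 = random.uniform(-math.pi, math.pi)
    q = (random.uniform(-2, 2), random.uniform(-2, 2))
    data.append(((t1, t2, q[0], q[1]), occupied(t1, t2, q)))

def predict(t1, t2, qx, qy, k=7):
    """k-nearest-neighbor stand-in for the learned self-model: given a joint
    configuration and a query point, vote among the k most similar experiences."""
    nearest = sorted(data, key=lambda d: (d[0][0] - t1) ** 2 + (d[0][1] - t2) ** 2
                                         + (d[0][2] - qx) ** 2 + (d[0][3] - qy) ** 2)
    votes = [label for _, label in nearest[:k]]
    return sum(votes) > k / 2
```

A planner could then call `predict` before executing a motion to check whether the body would sweep through a point of interest, which is the sense in which a learned self-model supports safe movement planning.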
FORTUNE ON A.I.
Is using A.I.-generated media ethical? That question is becoming increasingly urgent as A.I. systems that can create professional-looking images in a wide variety of styles from text descriptions become commercially available. These systems include OpenAI's DALL-E 2, a similar system created by the A.I. research lab Midjourney, and another called Stable Diffusion created by open-source developer collective Stability AI. The problem is that many artists and illustrators argue that the increasingly capable software is robbing them of potential work and that it is unethical for organizations that can afford to pay humans to create visual content to turn to the A.I. software instead. (The artists feel especially aggrieved because these A.I. systems are trained from thousands of images of historical and contemporary pieces of art and illustration found on the Internet. The artists receive no compensation for having unwittingly contributed to this training data. And adding insult to injury, it is possible to prompt one of these A.I. systems to create an image in the specific style of a particular artist.) Charlie Warzel, a journalist who covers the intersection of technology and culture and writes the "Galaxy Brain" newsletter for The Atlantic, stumbled into this controversy when he used Midjourney's A.I. image creation software to create an image of the far-right radio shock jock Alex Jones to illustrate a recent newsletter post. Warzel was attacked on Twitter for doing so, and he subsequently wrote a blog post apologizing for having used Midjourney and discussing the whole controversy. But Warzel is not going to be the last person to encounter this issue as A.I. image generation becomes increasingly mainstream. Whether artists and illustrators will succeed in compelling large corporations and media organizations to eschew the use of such software remains to be seen. But I wouldn't bet on it. The software is becoming too capable to ignore.
(Check out this Twitter thread for plenty of mind-blowing examples.)