As the debate around AI regulation rages on, creative industries are taking notice, and music is one of the biggest. In the past year alone, we’ve seen AI cover popular songs or fake new releases, Grimes let people use her voice for their creations, YouTube partner with record labels to monetize AI music, and more.
Much of the legal debate has centered on copyright. AI models like those used by OpenAI, Meta, and Google are trained on data scraped from across the internet, including vast amounts of copyrighted content. Digital artists have sued Stability AI, Midjourney, and DeviantArt for allegedly taking copyrighted art to train models that regurgitate similar work. Comedian Sarah Silverman and two other authors filed a lawsuit against OpenAI, claiming the company used their books for training. At least one record label executive even claimed infringement after a song featuring an AI-generated Drake came out.
But for many music industry insiders, there’s a more promising legal field: rules that protect against the unauthorized use of someone’s likeness. The approach sidesteps complicated questions about how intellectual property law should apply to AI-generated work. It can draw on existing state-level publicity rules. And in some ways, it cuts to the heart of many artists’ anxieties: the fact that someone is mimicking them without their consent.
“Likeness is definitely the first place you would look to when forming new legislation because an artist’s likeness is based on their voice,” says Evan Dhillon, founder of the voice cloning platform Kits.ai.
Copyright protections cover creative output: fixed works like lyrics, musical compositions, and recordings of songs. In the music industry, ownership of these works is often split; songwriters might own a song’s written composition, for instance, while record labels own its final recorded version.
AI poses a tricky problem for labels and musicians under copyright law: establishing ownership of a song that sounds like an artist’s overall output but does not directly copy any particular work. The music industry saw the impact of AI-powered voice cloning after TikTok user @ghostwriter77 released a viral AI-generated song imitating Drake and The Weeknd. It had all the hallmarks of a Drake track and inspired a sternly worded statement from his record label, Universal Music Group. Then, YouTube took the video down for featuring an unauthorized sample — something that has nothing to do with its AI aspects.
“At the core of music is math, and every mathematical combination has already occurred in some way, shape, or form. It’s the performance of that math that changes depending on the singer or the song style,” Justin Blau, co-founder of Royal and a DJ under the name 3LAU, tells The Verge. “Saying something is derivative is a pretty hard argument for copyright owners to make because we all borrow ideas from things that we’ve heard before. AI just does it at a way faster speed.”
Some music industry lawsuits have made this precise claim and won. In 2018, the estate of Marvin Gaye won a $5 million judgment finding that Robin Thicke’s song “Blurred Lines” was unlawfully derivative of Gaye’s “Got to Give It Up.” The verdict reshaped the music industry but was followed by contrary rulings in similar cases against Led Zeppelin and Ed Sheeran. And in all these cases, the claims concerned a specific song — not someone imitating a musician’s voice or overall style.
Likeness laws are fundamentally different from copyright
Overall, the idea is that a person has the right to control their reputation and make money off their identity. The rules often cover the use of identifying features of a person or brand without their permission. This is frequently a face, a name, or a company’s logo, but after Bette Midler won a 1988 lawsuit against Ford Motor Co. for imitating her voice in an ad, some states began explicitly adding voice as a protected element.
Many of the most prominent modern right of publicity and likeness lawsuits have centered on the unauthorized use of individuals’ likenesses in video games. The band No Doubt sued Activision in 2009, alleging that the game Band Hero let players use the band’s avatars to perform songs it never agreed to; the parties settled in 2012. Several NCAA basketball stars sued the NCAA, gaming company EA, and the Collegiate Licensing Company for using their faces and names in a video game without consent or payment. The courts sided with the players, and the rulings were the first volley in what eventually led to the NCAA allowing student-athletes to be paid for the use of their name, image, and likeness without losing their amateur eligibility.
Now, as generative AI platforms have sprung up around voice cloning, some of their proprietors see likeness as the most promising avenue for granting artists legal protections.
AI tools have inspired a backlash among some artists, but others have proven more open to the technology. Musician Holly Herndon created Holly+, an AI-generated clone of her voice that other artists can use. TuneCore, a music distribution platform that is working with Grimes to monitor the use of her AI-generated voice (as reported by Billboard), surveyed artists about AI in music creation, and many responded positively to the technology. But CEO Andreea Gleeson said artists wanted more responsible AI and more control over their art.
“Generative AI that is transparent and very clear in what went into creating that AI art goes a long way, but we also need a process of control where artists have a say about how and who gets their likeness,” Gleeson says.
But the overall state of US likeness law remains chaotic. There is no federal right of publicity, and only 14 states have specific statutes covering it. Christian Mammen, a partner at the law firm Womble Bond Dickinson, notes that these protections vary. Most require brands to get an individual’s written permission to use their name, portrait, picture, or voice for advertising or commercial purposes, but rules differ around parodies, digital recreations, and what rights deceased people have over their image.
New York, for example, doesn’t allow the use of a deceased person’s computer-generated replica in scripted works or live performances without prior consent, while California’s laws make no mention of digital replicas at all. Tennessee, home of country music, was one of the first states to grant a deceased person’s estate the rights to their image.
Lawmakers have considered new AI-related intellectual property regulation, and in several hearings on the topic, some have raised the possibility of using right of publicity rules. But a new legal framework remains months or possibly years away, if it happens at all.
AI-enabled music platforms insist they can mitigate risks
Some voice cloning companies restrict who can use a recorded sound or try to give artists a say in how others use their likeness. Kits.ai, for example, lets people generate voices and license them out so that both the artists behind the voices and the musicians who use them can profit. Stricter likeness laws — specifically applied to AI — would greatly benefit these platforms by enabling a legal crackdown on unauthorized alternatives.
It isn’t just AI-powered music platforms looking at licensing artists’ voices; even established record labels see this as a good start. The Financial Times reported that Universal Music is in talks with Google to license artists’ voices and melodies for generative AI projects.
But this raises one of the many unsettled issues around likeness law. Music labels frequently own a musician’s recorded output, but it’s less common for them to control the entirety of a musician’s voice, name, or appearance. Artists often sign agreements with labels for marketing purposes, though many are finding ways to do that marketing themselves, like on social media. If the right of publicity becomes the standard method for controlling who can profit from an AI-generated song, record labels might rethink how to add likeness controls to new record contracts.
Artists who offer their AI-generated voices, like Grimes or Herndon, feel assured they have some semblance of control over their likeness. Herndon has warned singers against signing contracts covering the use of their voices, tweeting in May that their voices might be used to train AI models without their knowledge. And when artists die, those rights pass to their estates or heirs. How long the right lasts varies by state; California offers 70 years, New York gives 40 years, and Tennessee only 10.
However, not all cases of voice cloning are so straightforward. AI platforms can create an entirely new voice from a library of other people’s voices — something that sounds a bit like Grimes but with Rina Sawayama and Phoebe Bridgers mixed in. That is not something the law currently makes room for. Mammen, the lawyer, notes there’s also wiggle room in current laws to allow for homages, parodies, and cover songs. There are also tacit agreements in the industry: Mammen says the music industry “struck a balance” with tribute bands, for instance, in which labels and artists don’t threaten to sue as long as the groups don’t claim to be the original artists.
“If we change the law in order to restrict AI-generated audio ‘sound-alike’ recordings, we also need to consider whether the change in the law would upset the equilibrium for cover bands — an unintended consequence if the law is overbroad,” Mammen said.
We’re just at the beginning of figuring out AI regulation, and already we’re seeing how attempts to future-proof music could upset the delicate balance of the music industry.