In 2022, the Federal Trade Commission (FTC) initiated a rulemaking process to establish the “Rule on Impersonation of Government and Businesses.” The proposed rule aimed to prohibit the impersonation of government entities, businesses, or their officials. The Commission had determined that such impersonation is a widespread issue, as evidenced by public comments received in response to the Advance Notice of Proposed Rulemaking. Seems all government-y and boring. Well, except that one of the underlying features of the modern AI landscape, voice cloning, was definitely on the minds of the FTC folks.
Then, in 2023, the FTC published a voice cloning challenge to address the risks posed by AI-enabled voice cloning technology. It acknowledged that voice cloning offers potential benefits to consumers and businesses, such as enabling individuals who have lost their voices to communicate again. The FTC’s main concern in its rulemaking, however, was that voice cloning poses significant threats, including fraud, extortion, and the misuse of creative professionals’ voices.
The Challenge focused on three main areas: administrability and feasibility, ensuring ideas can work practically; increased company responsibility and reduced consumer burden, holding businesses accountable while minimizing impacts on consumers; and resilience, developing solutions adaptable to rapidly advancing technology. The FTC stressed the importance of proactive solutions that mitigate harms at the source, akin to its efforts to combat robocalls, which have seen progress through technological innovation spurred by similar challenges.
The FTC’s stated “ultimate goal” was to ensure the benefits of AI are realized without compromising consumer protection or fair competition.
Let The Lawsuits Begin
In May of this year, voice actors Paul Skye Lehrman and Linnea Sage filed a lawsuit against the AI company Lovo, alleging the company misused their voice recordings to create and sell AI-generated versions of their voices without their consent. It turns out, Lovo obtained the plaintiffs’ voice samples from fiverr.com. Lehrman and Sage claim they were hired through Fiverr under false pretenses, believing their voice samples were for limited, non-commercial purposes. Lehrman was told his recordings were for academic research, while Sage was informed hers would be used internally for test scripts. Despite these assurances, both later discovered their voices being used in commercial contexts, including YouTube videos and podcasts, without their knowledge or agreement. Not being paid for those uses is probably a key claim here as well.
The lawsuit, filed in the U.S. District Court for the Southern District of New York, sought class action status covering other individuals whose voices may have been similarly exploited. Lehrman and Sage allege that Lovo not only misrepresented its intentions when obtaining their recordings but also marketed and profited from AI-generated versions of their voices under false pretenses. The complaint emphasizes that customers purchasing voice products from Lovo are acquiring what the plaintiffs describe as “stolen property.”
This case joins a growing wave of legal challenges against AI companies brought by creative professionals, including artists and writers, who claim their work has been used without consent to train AI systems. I have written several posts about these various lawsuits involving AI-powered image and video tools. These lawsuits highlight broader concerns about the legal implications of using copyrighted material and personal data to train AI models. Lehrman and Sage are seeking more than $5 million in damages and a court order preventing Lovo from continuing to use their voices. Lovo did not initially comment on the lawsuit.
More recently this year, the plaintiffs filed an amended complaint asserting 16 claims, including violations of the Lanham Act, deceptive practices, unjust enrichment, and copyright infringement, emphasizing the exploitation of their voices without proper consent.
The complaint described Lovo’s business this way:
LOVO does this by allowing subscribing customers to upload a script into its AI-driven software known as “Generator” or “Genny,” and generate a professional-quality voice-over based on certain criteria. For example, LOVO customers can choose between – and designate their preference for – male or female voices, regional accents, and older or younger-sounding voices.
In short, like many competitors (and even a feature in the latest iOS for iPhones), a small sample of a person’s voice is used to create a voice clone. That voice clone feature enables users to enter any text and have it credibly produced in the voice of the person whose voice samples were used to create the “clone.” This is not akin to creating entirely new voices from an AI algorithm, which other companies like ElevenLabs have been doing for years now.
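To make the mechanics concrete, here is a minimal Python sketch of the two-step workflow the complaint describes: derive a voice model from a short recorded sample, then synthesize arbitrary text in that voice. The client and method names here are hypothetical stand-ins, not Lovo’s actual API; they only illustrate the flow.

```python
# Minimal sketch of the text-to-speech voice-cloning workflow described above.
# "VoiceCloneClient" and its methods are hypothetical stand-ins, NOT Lovo's
# actual API. They illustrate the two-step flow: (1) build a voice model from
# a short sample, (2) synthesize arbitrary text in that voice.

from dataclasses import dataclass


@dataclass
class VoiceProfile:
    """Opaque handle to a cloned voice model."""
    voice_id: str


class VoiceCloneClient:
    """Hypothetical client for a generic voice-cloning service."""

    def clone_from_sample(self, sample_path: str) -> VoiceProfile:
        # A real service would upload the audio sample and fit a speaker
        # model from it; here we just return a placeholder handle.
        return VoiceProfile(voice_id=f"clone:{sample_path}")

    def synthesize(self, profile: VoiceProfile, text: str, out_path: str) -> None:
        # A real service would render `text` as audio in the cloned voice
        # and write the result to `out_path`.
        print(f"[{profile.voice_id}] rendering {len(text)} chars -> {out_path}")


if __name__ == "__main__":
    client = VoiceCloneClient()
    # Step 1: a short recorded sample is enough to derive the voice model.
    profile = client.clone_from_sample("narrator_sample.wav")
    # Step 2: any script can now be rendered in that cloned voice.
    client.synthesize(profile, "Welcome back to the channel...", "episode_intro.wav")
```

The point the sketch makes is the one at issue in the lawsuit: once the sample is captured, the speaker has no further involvement in, or control over, what the clone is made to say.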
There is another wrinkle in the facts, one conspicuously absent from the original complaint. The plaintiffs surmise that the short snippets of their voices that they recorded for payment from a buyer via fiverr.com became the source for the voice clone that Lovo created. Well, the fiverr.com agreement transferred all copyrights in those voice snippets to the buyer - a term that both plaintiffs agreed to when providing those audio snippets in return for payment from the fiverr.com buyer.
Lovo Strikes Back
Lovo’s motion to dismiss argues the plaintiffs’ claims lack merit due to contractual provisions in Fiverr’s Terms of Service, which Lovo contends granted it licenses for the voice recordings. The defense highlights the Terms of Service’s provisions regarding intellectual property and commercial rights, asserting the plaintiffs waived their copyrights and allowed commercial use of their recordings. Lovo also argues that digital replicas of voices do not fall under New York state claims, that voices cannot be “converted” as property, and that the Lanham Act claims fail without alleged trademarks. Additionally, Lovo asserts that training AI with licensed recordings does not constitute copyright infringement, particularly as the plaintiffs had not registered their copyrights before filing the lawsuit.

I have no idea if a person has ever attempted to register their voice with the trademark office. I doubt such a registration would be accepted, and if accepted, I doubt it would be upheld in court. We all have lifetimes of anecdotal evidence in which we have said or thought “that person’s voice sounds just like….” This seems akin to trademarking black hair or blue eyes. Plus, imagine the harms that would flow from allowing such protection for the sound of one’s voice. What if someone later appears on the scene, wants to be on the radio, or sing, or act, and just happens to have a voice similar enough to an already trademarked voice? Logically, they would be banned from exploiting the sound of their own voice in whatever fields the person with the trademarked sound-alike voice already operates in. It seems unlikely this is the way the trademark office would look at such a registration.
The distinction here is a fine, but important one.
The Voice Beneath Her Wings
In the late 1980s, Ford Motor Company’s advertising folks reached out to several well-known singers to get them to perform songs associated with them for an ad campaign. When Bette Midler declined their offer, the company obtained rights from the copyright holder of one of her well-known songs and hired someone who sounded just like her to sing it. She sued and eventually won on appeal. However, the ruling was not so broad as to prevent a person who merely sounds like Midler from making a living singing.
The difference here was that Ford was found to have intentionally created the impression that Midler was singing behind the video of its ad. It was this impersonation that persuaded the court to protect Midler’s voice. Had Ford merely hired someone who sounded like Midler to sing a particular song, not one closely associated with Midler, then Midler would have been unable to prevent that singer from performing. What doomed Ford was the implied endorsement by Midler of a Ford product through the use of a sound-alike singer performing a song closely associated with her. The ad also lacked any disclaimer, further leading the court to accept that Ford was attempting to make use of Midler’s reputation and the song’s popularity without obtaining Midler’s consent or, yes, paying her for the use of what sounded just like her voice.
Back to AI and Lovo
The case raises significant factual disputes, including whether Lovo’s initial representations were deceptive and how Fiverr’s licensing terms apply to AI training and cloning. Procedurally, Lovo contends the amended complaint, with its expanded allegations, remains insufficient to survive dismissal. The court’s decision on the motion will likely turn on nuanced interpretations of contractual language, intellectual property law, and the evolving legal landscape surrounding AI and digital voice replication.
This case reflects broader tensions in the legal system as it grapples with AI-enabled voice cloning. Recent developments, including the FTC’s guidance on preventing AI harms, Tennessee’s ELVIS Act, and the proposed federal “NO FAKES” Act, illustrate increasing regulatory attention to protecting individual rights against misuse of their likenesses. Regardless of the court’s ruling in Lovo, this litigation underscores the need for clearer legal frameworks as AI technology continues to challenge existing laws.
However, it also reveals that legislators are wading into territory that will undoubtedly have unintended consequences. For example, what if a company wants to produce a movie that documents the life of a famous actor, politician, etc.? Can they now not do that with an actor whose voice sounds like the person they are portraying? This and many other unexpected matters will be the domain of the courts for years to come.