The Legal Landscape of Deepfakes - Part I
Navigating Defamation, Privacy, Intellectual Property, Fraud, Identity Theft, Election Interference, Criminal Activity and Federal Rule of Evidence 901
We have touched on this issue in at least one previous post as part of a collection of challenging courtroom issues regarding the authenticity of image, video and audio evidence. In this dispatch, we focus solely on deepfakes. First, a definition:
Deepfake: The synthetic representation or manipulation of audio, visual, or multimedia content, typically utilizing artificial intelligence (AI) techniques such as deep learning algorithms. It involves the creation or alteration of media content with the intent to depict individuals or events that did not occur in reality or to misrepresent existing individuals or events by superimposing or replacing their appearances or voices in a deceptive manner.
This definition is decidedly dark. Hollywood is rapidly developing this technology for a variety of cost-saving reasons. One is the inevitable “re-shoots” of movie scenes required by discoveries during post-production editing. Those re-shoots can be expensive and time-consuming, delaying a film’s release and adding costs. Deepfake technology will enable studios to hire an unknown but similar-looking performer to complete the re-shoot and superimpose the actual actor’s face, body, expressions and voice onto that performer.
In the many technology presentations I have given to lawyers over the past 20 years, I have often commented on an undeniable truth: the most prominent force advancing image and video technology online is the adult entertainment industry. From the early development of high-resolution but low-file-size content (more easily accessible over very slow early-stage Internet modems) to the use of deepfakes, the adult entertainment folks are keenly focused on costs while their consumers are, shall we say, otherwise distracted.
The Internet has been full of deepfake pornography for years now. While some of it lies in murky legal waters (when created and shown for research purposes, for example), the development of deepfakes for this and other commercial purposes continues apace.
Who Else Is Making Deepfakes?
The short answer is: everyone, from academic and industrial researchers to amateur enthusiasts and visual effects studios. Governments may be dabbling in the technology, too, as part of their online strategies to discredit and disrupt extremist groups, or to make contact with targeted individuals. Whether governments would disclose their advancements in this area, or for what purposes they are developing deepfakes, remains an interesting question in a democracy. Are there legitimate national security uses for deepfake technologies? Certainly. Imagine a spy calling a target and speaking in the voice of the target’s colleague or relative. That kind of connection could be useful in a wide variety of contexts.
A non-existent Bloomberg journalist, “Maisy Kinsley”, who had a profile on LinkedIn and Twitter, was probably a deepfake. Another LinkedIn fake, “Katie Jones”, claimed to work at the Center for Strategic and International Studies, but is thought to be a deepfake created for a foreign spying operation.
Are There Non-Nefarious Reasons to Make Deepfakes?
Are deepfakes always malicious?
Not at all. Many are entertaining and some are helpful. Voice-cloning deepfakes can restore people’s voices when disease takes them away. Deepfake videos can enliven galleries and museums. In Florida, the Dalí Museum has a deepfake of the surrealist painter who introduces his art and takes selfies with visitors. For the entertainment industry, the technology can be used to improve the dubbing of foreign-language films and, more controversially, to resurrect dead actors. For example, the late James Dean is due to star in Finding Jack, a Vietnam War movie.
An early example, with a voiceover provided by an actor
Zuckerberg Boasting About the Data He Has
An Actor Saying New Lines
Watch a person appear to be an accomplished dancer
One of the most famous deepfakes: Tom Cruise and his newest premiere date, Paris Hilton. (Not real.)
What is Defamation?
Most of us can recite this, but just in case, let’s break down the two forms: libel is a false written statement, and slander is a false spoken one. But falsity alone is not enough. To be actionable, the false statement must cause harm.
How Deepfakes can be Used for Defamation
It does not require much creativity to imagine how deepfake audio or video can be used to defame individuals. But deepfakes can also expose other people to defamation claims for things they never said or did, because those statements or acts appear in video or audio created with deepfake technology.
In a recent hearing on AI regulation in Washington, D.C., an elderly representative demonstrated how easy it is for anyone’s voice to be cloned and made to say words they never spoke.
Deepfakes are increasingly present across the Internet; Sensity AI found that the number of fake videos online has roughly doubled every six months since 2018. While much of the discussion has focused on the danger deepfake technology poses to politics, disinformation and democracy, deepfakes are also a critical matter of women’s rights and gender-based violence. In 2019, AI firm Deeptrace found that 96% of deepfake videos were pornographic, and nearly all of them manipulated images of women. There has been no independent review of the data supporting this claim, but such content is certainly a feature of adult entertainment and of content appearing to be adult entertainment depicting famous persons.
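To get a feel for how quickly a “doubling every six months” rate compounds, here is a minimal sketch. The 2018 baseline figure below (roughly 8,000 videos) is an assumption for illustration only, not a number reported in this post:

```python
# Illustration of exponential doubling: a count that doubles every six
# months grows by a factor of 2**(2 * years) over `years` years.
# The 2018 baseline of 8,000 videos is an assumed figure for illustration.

def projected_count(baseline: int, years: float, doubling_months: float = 6.0) -> int:
    """Project a count that doubles every `doubling_months` months."""
    doublings = (years * 12) / doubling_months
    return round(baseline * 2 ** doublings)

if __name__ == "__main__":
    base = 8_000  # assumed 2018 baseline (hypothetical)
    for yrs in (1, 2, 3):
        print(f"after {yrs} year(s): {projected_count(base, yrs):,}")
```

Under that assumed baseline, the count quadruples each year, reaching half a million within three years. Exponential claims like this are why even a rough growth rate matters more than the starting figure.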
The creation of this kind of content does not require that the defamed person ever circulated video or images of themselves engaging in such conduct. It can now be accomplished using completely PG photos taken from anyone’s social media.
Sexually explicit media online can cause victims severe repercussions, including barriers to or loss of employment, harassment, social isolation, and threats or acts of violence, not to mention the inherent mental toll of trauma. California and some other states already have “revenge porn” statutes, which prohibit publicizing images or videos of sexual conduct by adults without their consent. These statutes set a precedent that states could follow in outlawing the publication of such content generated wholly by AI tools. Whether the regulation of such conduct is properly a defamation claim or some new form of abuse is being debated in legislatures and in public across the country; the question being asked is whether defamation law is too weak a tool. In addition, prosecuting such claims can often exacerbate the victim’s situation by shining a light on the content itself, which might still be accessible in some dark corners of the Internet.
An AI app, available since 2021, easily enables the swapping of one woman with another in adult entertainment content. (Source)
But, Are There Defenses?
To prove defamation, the deepfake audio or video must be presented as if the publisher or poster were claiming it is authentic and true. However, it would be trivially easy for producers and distributors of non-consensual deepfake pornography to skirt this issue entirely by simply putting “fake” in the title, without ever addressing the core problem posed by deepfakes: the lack of consent. The Tom Cruise example above is just such a case; it clearly displays words over the video indicating it is fake. But could it still be defamatory, depending on what is depicted?
As far back as 2010, the iOS Appstore allowed an app called Nude It which enabled users to load in photos of clothed individuals on their phone and see what the AI model imagined that person looked like nude. (Source)
Deepfakes are yet another example of technology growing exponentially faster than the law, leaving people already at greater risk of harm without legal protection. However, haste in crafting AI regulation could stifle innovation, put us behind our global competitors, and imperil our military readiness.
In a first-ever Part II of a topic, to be published next Monday, we will explore privacy issues, election fraud, disinformation campaigns and the collision between deepfakes and the First Amendment, among other issues.
If you like Legal AI, please forward this article on to your friends and colleagues and subscribe if you haven’t already. Until next time.
Just before this post was ready to be published, an AI-generated image of an explosion at the US Pentagon spread online. Check out how it affected the news and stock market here.