DeepFakes Part 2
A few months ago, we published an edition that touched on DeepFakes lightly. This post is dedicated entirely to the technology and to the legal issues that will possibly, likely, undoubtedly be raised by the use of DeepFakes.
First, What Is A DeepFake?
A "DeepFake"describes a type of synthetic media. Synthetic media includes any image, audio or video intended to appear authentic and unaltered, but which is partially or entirely manufactured. Photoshop and video editing, computer graphics, etc have been around for decades (Jurassic Park was on just the other day and for its time, pretty amazing fake dinosaurs). The rapid advancement of AI, however, has put into the hands of everyone the ability to manufacture DeepFakes. Currently, the most common techniques involve the use of neural networks to generate highly realistic and often deceptive content that can make it appear as though a person is saying or doing something they did not actually say or do.
While other posts have discussed the degree to which the legal profession will be changed by AI, most commentators have focused on the use of AI in political advertising. But there are no limits to the uses of DeepFakes for both innocuous and malicious ends.
The Ubiquity Problem
Tom Hanks recently faced the stupefying decision by a professional dental practice to use an AI version of him to appear to endorse its business. I await the explanation for this usage. But it will not be the last one. I drove by a small storefront daycare center in my little town, and it had large adhesive images of every Disney character you can imagine in the window. There is little likelihood this small business is paying Disney a licensing fee for the use of these characters. People subjected to DeepFakes, like Hanks, will reach a point of Whack-A-Mole: there will be too many violations to squash them all, and prioritization will rule the day.
AI girlfriends are already a thing. Meta, the owner of Facebook and Instagram, is already paying celebrities a fee to generate AI versions of them. The price is reportedly too lucrative to resist: five hours of work by the celebrity for five million dollars. Meta’s use of chatbots is not focused on companions but on various other uses, such as “Kendall Jenner's likeness [being] used for Billie…portrayed as a big sister to give users advice. And Tom Brady [as] Bru, a chatbot for debating sports.” Id.
These are not Deepfakes in the classic sense. The connotation of a Deepfake is the use of someone’s image or video (manipulated in most cases) without their consent. Meta’s approach rests on celebrity agreements, but it still produces something that appears to be that person when, via AI, it is not entirely that person. The uncanny valley is apparently upon us.
Who cares what Meta is doing anyhow? Here’s why. The tools Meta is using for its celebrity-inspired AI avatars are already open source and available to millions the world over. The creation of these celebrity-inspired avatars is never going to be limited to those with contractual arrangements. More to the point for the rest of us (since most people are not famous), millions of such avatars can be expected to appear here and there, spurring questions from people who know us.
The No Fakes Act
There is currently legislation being discussed in Washington, D.C. to combat the creation of DeepFakes. The No Fakes Act aims to give all persons the right to control the use of their voice or visual likeness, even after their death. The bill, perhaps surprisingly, anticipates First Amendment challenges. It carves out exceptions for traditionally protected First Amendment uses, such as the use of the DeepFake as “part of a documentary, docudrama, or historical or biographical work…[or for the purposes of] comment, criticism, scholarship, satire, or parody”. Id.
Detecting the Undetectable?
Into all this fakery step would-be detectives designing tools to try to detect AI handiwork. For now, the most prominent of these tools attack the problem of AI-assisted plagiarism in writing. GPTZero was an idea that a crafty developer built one weekend in the early days of ChatGPT; he went to bed and woke up a viral sensation. Students reacted with animosity. Those running schools or news organizations, always seeking to root out plagiarism, were interested. A business was born. However, as a longtime friend of mine often said years before I became a developer (or even a lawyer), “any lock a human can make, a human can break.” The “is it AI or not?” question is going to be everlasting, with the same cat-and-mouse tension that has always accompanied technologies like encryption and hacking.
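To make the cat-and-mouse concrete, here is a minimal sketch of one common detection signal, perplexity: text a language model finds highly predictable is treated as weak evidence of machine authorship. This is an illustration in the spirit of such detectors, not GPTZero's actual implementation; it assumes the Hugging Face transformers and torch packages and uses the small GPT-2 model purely as an example.

```python
# A minimal sketch of perplexity-based "is it AI?" scoring, in the spirit of
# tools like GPTZero (not their actual implementation).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity means the text is more 'predictable' to the model,
    which these detectors treat as a weak signal of machine authorship."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

sample = "The quick brown fox jumps over the lazy dog."
print(f"Perplexity: {perplexity(sample):.1f}")
```

The hard, and breakable, part is choosing a threshold: humans sometimes write very predictable prose, and AI text lightly edited by a human drifts right past the detector.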
Revenge Porn Legislation As a Guide
In California, the current law regarding revenge porn is Penal Code section 647(j)(4), which makes it a misdemeanor to intentionally distribute images or videos of a person engaged in sexual activity, when the person depicted in the images or videos had a reasonable expectation of privacy and did not consent to the distribution.
A violation of the law is established by distribution or sharing of images or videos of someone else with the intent to harass, annoy, or alarm that person. Even a reckless distribution of such content is sufficient to find a violation.
The law provides for various penalties, including fines, jail time, and restitution, depending on the severity of the offense. Additionally, victims of revenge porn in California may also have the option to pursue civil remedies, such as seeking an injunction or monetary damages.
It is worth noting that the law in California is not limited to revenge porn involving sexual activity, and may also apply to other types of intimate images or videos, such as those taken in a bathroom or changing room.
The law of revenge porn can serve as a guide for protecting people from deepfakes to some extent, as both involve the unauthorized distribution of intimate or private material. However, there are some important differences between the two that make it difficult to apply revenge porn laws directly to deepfakes.
Revenge porn laws typically focus on prohibiting the non-consensual distribution of sexually explicit images or videos of an individual, with the intent to harm or humiliate them. These laws may also provide for civil remedies or criminal penalties for those who violate them. However, deepfakes may involve the use of non-sexual images or videos, and the intent may not necessarily be to harm or humiliate the individual depicted.
Another difference between revenge porn and deepfakes is that revenge porn often involves the use of images or videos that were originally consensually created or shared, whereas deepfakes most often involve the creation of entirely fabricated material. This makes it more difficult to apply existing laws and regulations to deepfakes, as the legal frameworks for dealing with these types of situations are still evolving.
How Deepfakes Can Be Used for Fraud
The use of Deepfakes for fraud is virtually self-evident. A moving image on the screen appears to be someone influential to the viewer, or to multiple viewers. That apparent person then says whatever it has been programmed to say, and some or many viewers react to that communication in a predictable way.
There is one real-world case in which a human persuaded another person to go through with taking their own life. It is commonly known as the "Michelle Carter case." Carter was a teenage girl from Massachusetts charged with involuntary manslaughter for encouraging her then boyfriend, Conrad Roy III, to kill himself in 2014. Carter and Roy had exchanged numerous text messages and phone calls in which Carter encouraged Roy to take his own life, even after he expressed doubt and hesitation.
In 2017, a Massachusetts judge found Carter guilty of involuntary manslaughter and sentenced her to 15 months in prison, with an additional 15 months suspended, as well as probation. The judge concluded Carter's conduct amounted to “wanton and reckless conduct” causing Roy's death.
The case raised questions about the extent to which individuals can be held responsible for the suicide of others, particularly when the conduct at issue involves speech or communication. It also highlighted the potential for social media and digital communication to facilitate harmful behavior, and underscored the need for responsible use of these platforms. Deepfakes, and potentially consensually created avatars such as those Meta is currently collecting, could easily play such a role. And then, whom to sue?
Below is one of the earliest widely circulated examples of a Deepfake: an apparent Tom Cruise speaking to the camera.
Here, the creator explicitly labeled the video as a fake. Under the proposed law above, however, that still might not enable them to escape liability. And, before we reflexively murmur "First Amendment," consider the reality that many viewers of such a video will completely miss the disclaimer. Others may edit and repost it, omitting the original disclaimer altogether.
Imagine a law grappling with the fact that a depiction conspicuously labeled as fake can still harm a person’s reputation. Will a toy company really want to continue having a movie star endorse its product when a conspicuously labeled Deepfake nonetheless depicts that person in a very unpleasant light? If the star loses that endorsement, could they sue the Deepfake creator who took every measure to alert the viewer that the content was fake? What about subsequent possessors of the Deepfake who manage to remove the conspicuous "this is fake" label and offer the content as authentic, or without comment such that it is impliedly authentic?
When does a conspicuously labeled Deepfake lose First Amendment protection as satire or ridicule, commentary or comedy?
And the converse: what about authentic videos purposefully mislabeled as Deepfakes by malicious actors?
In the last few years, some high-profile Deepfake examples have come to light. The chief of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believed the voice was a Deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages.
Election Interference Supercharged
The 2016 Presidential election had its share of video moments. Some were contemporaneous statements made by candidates during the election cycle and were undeniably authentic. Others were historical video of one candidate or another which showed them in a poor light and were, well, deniable. The future of campaign advertising is going to involve Deepfakes. It is also going to involve the now increasingly plausible denials that a given video or audio clip is in fact itself a Deepfake.
Potential voters are now well trained, through social media usage, to have short attention spans. Into that information habit will come Deepfakes that are easily debunked within 24 or 48 hours. By then, however, millions of potential voters will already have moved on to the next thing, never learning that what they now believe about a candidate is actually fictional. Political campaigns the world over know this and will not hesitate to exploit that hole in the voter information consumption habit. Perhaps the answer is fining a campaign or organization eventually shown to have distributed what turns out to be a Deepfake containing inaccurate information? Can organizations deploy Deepfakes of candidates for parody purposes? Does regulation of disclaimers and/or mandatory labeling make sense? Likely, a multi-pronged approach is going to be required. There will be no single solution to all the permutations of how candidates or their proxies will use Deepfakes to influence voter preferences.
Some possible solutions:
Education: Educate voters about the risks associated with Deepfakes and the potential for election interference. This can include providing accurate information about the election process and how to identify false or misleading information.
Technology: Develop technology to detect and identify Deepfakes, and to prevent interference with voting systems. This can include the use of artificial intelligence and machine learning to identify and analyze Deepfakes, as sketched in the example after this list.
Regulation: Develop and implement regulations that address the use of Deepfakes and election interference. This can include penalties for those who engage in these activities, as well as measures to prevent their use in the first place.
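As a concrete illustration of the technology item above, here is a minimal sketch of a Deepfake image detector: fine-tuning a pretrained image classifier to distinguish real face crops from fabricated ones. The folder layout (data/train/real, data/train/fake), the model choice, and the training settings are all assumptions for the example, not any deployed detection system.

```python
# An illustrative sketch of a Deepfake image detector: fine-tune a pretrained
# ResNet-18 to classify face crops as "real" or "fake". The data paths and
# hyperparameters are placeholders for the example.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder maps each subfolder name ("fake", "real") to a class index.
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a token number of epochs for the sketch
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

In practice, detectors like this degrade quickly as generation techniques improve, which is why the list above pairs technology with education and regulation rather than relying on any one of them.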
Deepfakes and the Courtroom
Rule 901. Authenticating or Identifying Evidence
(a) In General. To satisfy the requirement of authenticating or identifying an item of evidence, the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is.
(b) Examples. The following are examples only — not a complete list — of evidence that satisfies the requirement:
(1) Testimony of a Witness with Knowledge. Testimony that an item is what it is claimed to be.
(2) Nonexpert Opinion About Handwriting. A nonexpert’s opinion that handwriting is genuine, based on a familiarity with it that was not acquired for the current litigation.
(3) Comparison by an Expert Witness or the Trier of Fact. A comparison with an authenticated specimen by an expert witness or the trier of fact.
(4) Distinctive Characteristics and the Like. The appearance, contents, substance, internal patterns, or other distinctive characteristics of the item, taken together with all the circumstances.
(5) Opinion About a Voice. An opinion identifying a person’s voice — whether heard firsthand or through mechanical or electronic transmission or recording — based on hearing the voice at any time under circumstances that connect it with the alleged speaker.
(6) Evidence About a Telephone Conversation. For a telephone conversation, evidence that a call was made to the number assigned at the time to:
(A) a particular person, if circumstances, including self-identification, show that the person answering was the one called; or
(B) a particular business, if the call was made to a business and the call related to business reasonably transacted over the telephone.
This rule of authentication has regularly been applied to image and video evidence, but it has not kept pace with the technology. Into that reality come the ever-decreasing cost and sophistication needed to create Deepfakes, and defendants, especially those in high-profile cases built on circumstantial evidence, for whom a light bulb goes off.
A defendant somewhere in the United States is on trial for murder, a drive-by shooting. No video evidence of the crime exists, but there is eyewitness testimony about the car and the defendant's friend driving it, some pre-murder threats from the defendant, and an existing beef between the victim and the defendant. That might be enough to convict. The defendant's attorney, however, arrives in court with a thumb drive or CD, and on it is a video of the defendant, clearly identifiable by a unique neck tattoo, dancing and partying with people on a beach in Florida on multiple days before, the day of, and the day after the murder occurred in Ohio. His lawyer says, "I want the jury to decide if this video is real or not."
The prosecution would understandably object. But why is it not appropriate to have the jury decide whether the video is real at all? These are questions for judges nationwide to confront unless the rule is updated.
Conclusion
The rise of Deepfakes has created new challenges for free speech, IP law, and the regulation of advertising and elections. Their emergence and continued improvement highlight the need to balance the right to free expression with the responsibility to act in a way that is ethical and respectful of others. Reality can now become plausibly deniable. And the introduction of such evidence in court can consequently become more difficult to accomplish.
Will plausible deepfakes shift stock prices, influence voters and provoke religious tension? It seems a safe bet.