Many states now have laws with both criminal and civil penalties punishing citizens who use AI to modify the image of a clothed person to imagine what that person would look like nude. Just this month the city of San Francisco (not coincidentally the current hub of the most innovative and controversial AI work) filed a lawsuit against several companies and individuals. The lawsuit alleges those companies facilitated the violation of various state and federal laws. How did they do that? By creating open-source image generation AI systems that enabled their users to do the following:
Create depictions of actual adults as if they were posing nude; and
Create depictions of actual minors as if they were posing nude.
The lawsuit describes the key violations this way:
One disturbing form of misuse is the adaptation of open-source AI image generation models to create fake pornographic and sexual abuse content depicting real, identifiable women and girls, so-called “deepfake pornography” or “deepnudes.” These models have led to the proliferation of websites and apps that offer to “undress” or “nudify” women and girls.
The obvious harm to both groups is referenced in the complaint. It should be noted that California is not the only state with such a prohibition, and federal law has prohibited such depictions of minors for decades. Of course, these historical prohibitions did not contemplate how such images would or could be made, but their legality, and the challenges to them, do not depend on the method, just the outcome.
The lawsuit alleges that images made in this way have been used to harass, bully and in some cases extort women and girls. This post will focus on the use of such technology as it relates to adults.
What Does the First Amendment Say?
A lot has been and will continue to be written about the precise boundaries of the First Amendment. In the area of artistic expression (speech, music, art, imagery and videos, for example), the U.S. Supreme Court interprets speech broadly. Content-based laws are “presumptively unconstitutional” and subject to a strict scrutiny standard of judicial review. Reed v. Town of Gilbert, 576 U.S. 155, 163 (2015).
Screenshots from an AI-generated video, viewed 12.4 million times as of this writing, intended to depict Donald Trump and Kamala Harris in a way that perhaps both of them would be upset about.
Strict Scrutiny Revisited
Unless you do First Amendment cases for a living, it is likely that the first and last time you dug into the concept of strict scrutiny was in law school. So, a brief explanation makes sense.
Under strict scrutiny, the government is required to demonstrate that the law in question serves a compelling governmental interest and is narrowly tailored to achieve that interest. In this context, narrow tailoring generally means that if a less restrictive option can achieve the government’s objective, that option must be chosen by the legislature. Therefore, when content-based laws are challenged under strict scrutiny, the government has the responsibility of proving that any alternative measures would be less effective than the law being challenged.
What is a content-based free speech restriction? Here are two examples:
Political Speech Regulation: Suppose a local government enacts a law that prohibits individuals from displaying political signs supporting or opposing a specific candidate within 60 days of an election. This restriction would be content-based because it specifically targets political speech, limiting expression based on the subject matter or viewpoint.
Hate Speech Law: Imagine a law that makes it illegal to use language that disparages or insults a particular religion or ethnic group. This type of law is content-based because it restricts speech based on the specific content or message, in this case, prohibiting expressions that are deemed offensive or harmful toward a specific group.
With that refresher out of the way, back to the concerns of the San Francisco lawsuit. It is reasonable to assume that persons who have never posed nude (or are no longer doing so) could be emotionally upset, offended and perhaps even suffer some real-world effects (job loss, friendships harmed, etc.) from the appearance online of AI-created images of them appearing naked or in fictitious (but real-appearing) intimate situations.
However, merely being offended or upset about the content of someone else’s speech is insufficient on its own to push that speech outside First Amendment boundaries. Therein lies the problem. Putting aside the obvious psychological harms being addressed in the San Francisco lawsuit, what about other forms of speech?
For some people, the following images or videos would create as much if not more psychological harm, yet they are outside the reach of the law(s) at issue in the above lawsuit.
A person whose religion forbids holding hands with a person not their spouse is depicted walking arm in arm with an AI-generated apparent boyfriend or girlfriend;
A devoutly religious Catholic is depicted displaying an objectively obscene hand gesture toward the Pope at a public gathering;
A person is depicted eating something disgusting;
A person is depicted standing alongside an actual person holding a sign with an offensive message on it, implying the victim’s assent to the offensive message;
A person whose online brand is their veganism is depicted processing and eating a chicken;
A pro-life political candidate is depicted walking in the front door of a known family planning clinic with what appears to be his 20-year-old daughter.
The only bar to additions to this list is creativity. The point is that there are many AI-generated images devoid of nudity that can reliably generate the kinds of emotional, psychological, employment and reputational harms that AI-generated nude images also generate. Fair enough, say some not well versed in the law - then ban the creation of any image that causes or is reasonably likely to cause the depicted person those harms. Were it so simple.
The Naked Argument
Combining the content-based restriction concept with what this law is aimed at reveals something troubling for the constitutionality of the law. The law has good intentions: to prevent victims of AI-generated nude images from enduring the harms that are highly probable to occur. However, these harms can also occur in response to AI-generated images that contain no nudity. It is difficult to see how current First Amendment jurisprudence, especially the strand that focuses on art and artists, can distinguish between the harms caused by AI-generated nude images and the harms caused by AI-generated images without nudity.
For this analysis, AI is not really the proper focus. AI is not a product; it is a tool. Persons using their photorealistic painting skills could also fall under the law’s restrictions for painting the image of a person’s face and an imagined likeness of that person naked. Likewise for sculpture. How about a nearly identical actor or actress in a stage play depicting that person in such a state?
Offense
Despite what many non-lawyers might assume, speech that offends, upsets, disgusts or even causes sadness, anxiety, anger or worry is not - by those outcomes alone - outside the First Amendment’s broad protections. The issue then becomes: will the sources of some offensive reactions be banished while others are allowed to flourish? Is there an alternative?
Beyond the other possible challenges, a statute like the one involved in this case will fail if there is some less restrictive means for the government to meet the same ends - protecting victims from the harms that such images have a high probability of creating.
The law here simply says, non-surgically, they are all banned. However, is there a less restrictive way to avoid the harms that come from the assumption viewers may make that the image they are viewing is actually that person naked? A content label on all published AI-generated content in which the person depicted did not actually pose nude? That’s a law that would likely pass a constitutional challenge, as we have warning labels for so many other things that have already grooved that legal ground.
The Defense Problem
An AI-generated image of a person appears online. That person arrives at your office proclaiming that the nude image of what appears to be them is not in fact them. Fair enough. Seems like a clear violation. But, questions.
Where online did the image first appear?
The client may think they know this, but you have to follow up and determine whether the place they first saw the image is actually the first place it appeared online, or merely the first place they happened to see it.
Do you have any images online?
On social media?
On communication apps like Snapchat or WhatsApp?
Have you ever posted nude images online?
Have you ever sent nude images to anyone?
Who created the AI nude image (if the client knows their identity)?
Do you know that person?
Did they communicate with you (possible extortion issue)?
Do they live in the jurisdiction?
Do they live in the United States?
What is the source of the original image that contains your face?
The issue here is a huge one.
Say a user of one of the AI nude tools took the image of this woman and “nudified” it. Pretty easy to see the harm. The user is identified and then gets sued (or indicted in some jurisdictions). Seems straightforward, except the image above is entirely AI-generated. That woman does not exist in real life. Now, perhaps someone who looks just like her does exist, as in, is a real human person. What of that fake nude now?
But, let’s stay with the above AI-generated person. It was generated with a simple prompt on my laptop using a free app called DiffusionBee. Do you see the problem? If the perpetrator of the AI nude does not know the person whose image they transformed, how are they to know the image even depicts a real person? This law assumes that the person who originally took an image, presumably found online, and used the AI tool to transform it actually knew that the image they were using depicted a real person. What would be the basis to make that assumption? Is it fair to assume that all images online that appear to be real persons are, in fact, real persons, or that viewers of such images should be required to assume they are real persons?
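To illustrate just how little effort this takes, here is a minimal sketch of local text-to-image generation. It uses the open-source diffusers library rather than DiffusionBee (which wraps the same idea in a point-and-click app), and the model name and prompt are illustrative assumptions, not what was used for the image above.

```python
# Illustrative sketch only: creating an entirely synthetic "person" locally.
# Assumes the Hugging Face diffusers library and a downloadable Stable Diffusion
# checkpoint (the model id below is an assumption, not the author's setup).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # or "mps" on an Apple-silicon laptop

# A single short prompt produces a photorealistic person who does not exist.
image = pipe("photo portrait of a woman, natural light, realistic").images[0]
image.save("synthetic_person.png")
```

The point of the sketch is not the particular tool; it is that nothing in the output signals whether the face came from a real photograph or from noise, which is exactly the knowledge problem discussed next.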
The Mental State
A statute which touches the First Amendment as this one does cannot survive with a mental state of negligence or even recklessness. Knowledge is going to be required. Proving a user’s knowledge that the image they applied the AI transformation to depicted a real person is going to be nearly impossible. It cannot be proven by simply arguing, “the image looked pretty real.” It is going to be required that the user not only knows the person in the image (their face at least) but also knows that image itself is not AI-generated. The layers to a credible defense are going to be many.
What if the creator argues they used an image of the victim’s face and another image of what they were told was the victim’s nude body? It turns out that wasn’t accurate, but as to the defendant, the critical question is what they knew, not what reality was.
Hence, all the questions to your client above (and many more) are critical to understanding whether you have a case that you can actually prove under the statute.
The Rock/Hard Place Problem
Like so many First Amendment cases, this fact pattern presents reasonable arguments on both sides. On one side, the boundaries of the First Amendment should not include AI nudes because of the harms they are likely to create. On the other, the First Amendment protects the creation of these images because the harms such images are likely to create are also present in images that do not depict any nudity.
Holding the creators of the AI models used to generate such images liable also presents problems. It harkens back to the creators of the copy machine or fax machine. The machine itself does not infringe on a person’s copyright, but it can be used by bad actors to infringe copyright. Likewise, most AI models are useful for creating images that do not violate these statutes as well as images that do. Is it the AI model maker or the user that bears responsibility for such contraband images?
Not So Conclusive Conclusion
Although the jurisprudence in the area of the First Amendment and sexual depictions is at times the most meandering and contradictory, it is also an area where the court is most comfortable tightening First Amendment boundaries. This leaves no real predictability to the outcome of the case. But it’s a good bet it does not conclude at the trial level - there are many more such cases to come.