AI, Politics and The 1st Amendment
Do Politicians and Their Supporters Have Fewer Free Speech Rights?
In late April of 2024, the Florida legislature passed, and the Governor signed, legislation entitled “Artificial Intelligence Use in Political Advertising.” Florida is not the first state to do so; Michigan, Minnesota, Texas, California, and Washington beat it to the punch. Notice something about this collection of states, all not only worried about AI in politics but enacting legislation in response to it? (Spoiler alert: both sides of the political aisle think this is a problem worth worrying about and worth legislating about.) But, as with the first generation of anything (anyone still using the first-edition iPhone?), let’s take a look at not just the intent of these laws, but some anticipated unintended consequences.
Florida
The devil is in the details, or rather, the definitions.
106.145 Use of artificial intelligence.—
(1) As used in this section, the term "generative artificial intelligence" means a machine-based system that can, for a given set of human-defined objectives, emulate the structure and characteristics of input data in order to generate derived synthetic content including images, videos, audio, text, and other digital content.
Defining AI, it turns out, is not as straightforward as it seems. Software already exists that can create its own “non-human-defined” objectives. Also, what does it mean to “emulate the structure and characteristics of input data”? I have built, and am building, AI applications in multiple domains right now. I practiced law for more than a decade, working exclusively with technology in criminal and civil cases. I could not explain that statement to a client in a way that I felt was authoritative and comprehensive. Input data to LLMs, for example, is increasingly “synthetic data,” a term that has come to mean data created by computers. Ironically, the focus on using synthetic data to train LLMs (the tools being used for generative AI today) came about in response to (unintended consequences alert) lawsuits by creators claiming LLMs improperly used their copyrighted data for training.
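To see how slippery that definitional language is, consider a toy sketch (my own illustration; nothing here comes from the statute or any actual campaign tool): even a short Markov-chain text generator arguably “emulates the structure and characteristics of input data” to “generate derived synthetic content,” yet nobody would mistake it for modern generative AI.

```python
# A minimal sketch, assuming the statutory language is read literally.
# This toy Markov-chain generator "emulates the structure and
# characteristics of input data" to produce "derived synthetic content"
# (text), yet it is a decades-old technique, not a modern LLM.
import random
from collections import defaultdict

def build_model(text: str) -> dict:
    """Record which words follow which in the input data."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model: dict, start: str, length: int = 10) -> str:
    """Emit synthetic text whose word-to-word structure mirrors the input."""
    word, output = start, [start]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Hypothetical input data, purely for illustration.
training_text = "the candidate voted for the bill the candidate later opposed"
print(generate(build_model(training_text), start="the"))
```

If that trivial program fits the definition, the line between regulated “generative AI” and ordinary software is anything but clear.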
The Amorphous “Appears to Depict”
Any digital content created using AI triggers the statute’s disclaimer requirement “if the generated content appears to depict a real person performing an action that did not actually occur, and if the generated content was created with intent to injure a candidate or to deceive regarding a ballot issue.”
The first question that comes to mind: hasn’t political advertising forever contained depictions of candidates doing things they never did? I can imagine an opponent’s ad that depicts a politician digging a hole with money flowing into it, as commentary on a supposedly poor economic record. The politician never dug any hole, and reams of paper money never fell into one. So now, if AI is used to create that depiction with less effort than in years past, the ad has to contain a disclaimer? Seems so.
It’s apparent that this provision was designed to address circumstances where the depiction portrays someone in a plausibly real scenario, perhaps one that causes the viewer to wonder, “was she really driving her car toward the water and plunging in?” because the advertisement was remarking on a candidate’s supposed recklessness.
The loophole here is obvious. Just make the ad without using anything that meets the statute’s definition of AI and you are good to go. In one sense the statute is saying, “make the misinformation in your political ad the hard way (i.e., with non-AI tools).”
Graphics
Alongside the obvious concerns about fake audio, video, and the like, the statute includes “graphics.” While the legislature had its eye on the spread of misinformation at scale, it has quite literally swept candidates’ use of tools like Canva, a dominant graphics platform today, into the statute’s reach. And presumably all of Canva’s competitors are also infusing their tools with AI to aid customers in graphics creation. What is to become of such tools now, and how will competitors for the business of Florida candidates market to them? “Our tool does not use any AI to make graphics. Try our slower, less creative, and much more time-intensive graphics tool and comply with the graphic design law in Florida.” A facetious advertising idea, of course, but not so far off. Companies will now have to do something to assure candidates and their committees (along with political action committees) that they either do not use AI in their tools or offer a version devoid of AI components.
What Might Be Missing
The statute punishes candidates who fail to provide the required disclaimers in advertising with a misdemeanor and a fine. What the statute omits, however, is a mechanism to defeat the value of using AI to create misinformation. How so? There is no mechanism for a complainant under the statute (who can be any person from anywhere; bots, anyone?) to actually achieve a removal or retraction of such an advertisement. So, as the axiom goes, the lie will get around the world before the truth gets its boots on.
Given the nature of campaigns and the often late-developing, high-impact stories near Election Day, candidates will be incentivized to hold back their best AI fakes until right before the election. Even if a complaint is received, and the advertisement is later determined to be a statutory violation, the election and its results are long since done. The statute does indicate these complaints should be heard in an “expedited hearing.” Can you see the strategy here, though? A rival waits until seven days before the election, has a pile of these complaints filed (whether believed to be authentic or merely to clog the system), and the targeted candidate is mired in hearings, lawyers, and other matters in the last week of a tight campaign. Do not take these criticisms as coming from a position of knowing better. The problem of misinformation is a challenging one, and it is particularly acute in politics. Legislation like this is not surprising. It will always struggle to find ways to address these issues.
That Pesky 1st Amendment Thing
Finally, in the rush to try to deal with a new and rapidly evolving problem in political communication, rights under the 1st Amendment loom. What does the First Amendment actually prohibit and permit when it comes to making content that “appears to depict” individuals engaged in conduct that never occurred?
Justice Black wrote on this topic in Mills v. State of Alabama: “Whatever differences may exist about interpretations of the First Amendment, there is practically universal agreement that a major purpose of that Amendment was to protect the free discussion of governmental affairs.” Political speech would include discussions of candidates and the political process. In non-political forums, the history of protected satire, ridicule, and the like is well grounded. This statute and others like it posit a world where the use of AI somehow deprives candidates of the ability to invoke satire and ridicule without alerting viewers that AI helped create that content. Perhaps adding the disclaimer is not a huge First Amendment incursion. The statute does not prevent ridicule or satire, or even depicting candidates engaged in conduct they never engaged in. It merely requires a disclaimer so that the consumer of such content is alerted that the apparent depiction is not a real one.
In the well-known “corporate expenditures as political speech” case of Citizens United v. Federal Election Commission, the Court found that disclosure of the source of funds used in political speech did not offend First Amendment rights. In that way, the portion of the Florida AI political speech statute (and those of other states) requiring disclaimers on AI-powered communications is likely to be permissible. The more difficult challenge will be candidates and their staff trying to figure out, “was this content produced using AI?” I honestly do not know if any part of this Substack platform is, behind the scenes, using AI. Does it count as a violation if a candidate simply requests graphics from a designer and the designer does not realize the tool used had AI somewhere in it?
Opening For Bad Actors
When I used to teach a few technology law courses as an adjunct professor, I told students that one of the best ways to prepare for the opposition in court is to see things from their perspective. If you were the prosecutor, how would you have committed the crime with which the accused is charged? In so doing, you might stumble on places, circumstances, or angles where errors could have been made. That makes your search for useful evidence easier and your angle of attack more unconventional.
Similarly here, given that there is now a mandatory AI disclaimer statute, bad actors can perhaps exploit voters. How? If an advertisement uses AI, “appears to depict” a rival engaged in conduct that never occurred, and omits the required disclosure, will voters then place their trust in that communication? If you habituate voters to the disclaimers placed by ethical candidates, supporters of rivals with no such adherence to ethics can create AI-powered content that omits the disclaimer. That omission, in another case of unintended consequences, imbues the AI misinformation of bad actors with heightened believability in the eyes of voters and viewers. Think like a bad guy and read the statute again.
More Definitions….
Section (2) of the new law (taking effect July 1, 2024) has this potentially slippery slope: “If a political advertisement….or other miscellaneous advertisement of a political nature….” The statute does not define (and probably cannot sufficiently define) what constitutes an advertisement of a political nature. As written, the statute does not merely require compliance from candidates, but from any advertisement of a political nature. Is my post on Twitter, my story on Facebook, or the audio of my podcast something of a political nature? Some folks today believe that everything personal is political anyhow. Depending on how courts interpret this provision, the statute may well encompass everything online, anywhere, that can plausibly be argued to be of a political nature. I presume the drafters meant to restrict the statute’s application to political (as in candidates and their supporters) advertisements. But it does not read so restrictively. As we know with legislation generally, the best of intentions….you know where that leads. Wait, was that a statement of a political nature? Well, no generative AI was used in the creation of this post. So, no one complain about this, please. Until next time.