In 2024, the Idaho Legislature enacted three laws addressing artificial intelligence (AI) to regulate explicit and political deepfakes and AI-generated material exploitative of children.
House Bill 391 (H0391): This legislation criminalizes the creation and distribution of explicit synthetic media, including AI-generated child pornography. It aims to protect individuals, especially minors, from exploitation through AI-manipulated explicit content.
The Idaho Legislature enacted House Bill 391 (H0391) to address the rising concerns over AI-generated explicit content, commonly known as “deepfakes.” This legislation criminalizes the creation and distribution of explicit synthetic media without the depicted individual’s consent, aiming to protect individuals from exploitation and harassment facilitated by advanced AI technologies.
Background and Motivation
The proliferation of AI technologies enables the creation of highly realistic synthetic media, including explicit content that can be indistinguishable from authentic images or videos. I have written numerous past posts about deepfake regulation and civil libertarians' concerns about it. Several posts have also covered the basics of the underlying technology, training data, and related topics.
This deepfake capability has led to instances where individuals’ likenesses are used without consent in explicit materials, resulting in significant emotional and reputational harm.
The American Civil Liberties Union (ACLU) of Idaho expressed concerns about potential overreach and the impact on free speech. In its written testimony, it stated, "We think HB 391 regulates too much potential speech. Lawmakers should achieve the intended aim with a more narrowly crafted bill." This type of legislation puts legislators and constituents in a difficult position: it requires them to navigate the obvious harm of such content involving minors without infringing on the free speech rights of adults.

It also requires a deep dive into the U.S. Supreme Court's original justification for outlawing the production, distribution, and possession of such content, because at that time (in the late 1970s and early 1980s) producing it required harming a child. Reasonable people, then and now, fully supported outlawing the production, distribution, and possession of such content to diminish the motivation for others to harm minors to produce it (and, at that time, to sell it in what was then a lucrative market). The Court reasoned that the government had a compelling interest in stopping the harm to minors inherent in producing such content. Given that rationale for outlawing such content nearly 50 years ago, the use of AI to create it seems to sidestep the original justification: a realistic but fake AI-generated image harms no minor in its making. That same rationale, however, would seem to provide powerful support for laws that prevent AI-generated content depicting an actual, identifiable minor engaged in such conduct.
Again, as I have noted in past posts, non-explicit deepfakes can also cause "significant emotional and reputational harm."
Examples
1. Deepfake Involvement in Criminal Activity
• A deepfake image of a person appearing to commit a crime, such as shoplifting, vandalism, or assault.
• Impact: This could lead to false accusations, reputational damage, and even legal consequences if the image is believed to be genuine.
2. False Political Endorsement
• A deepfake showing a person attending a rally or holding a sign endorsing a controversial or extremist political cause or candidate.
• Impact: This could harm the person’s professional or personal relationships, especially in a polarized political climate.
3. Fabricated Workplace Misconduct
• A deepfake image of someone appearing to engage in unprofessional behavior at work, such as sleeping on the job, drinking alcohol at their desk, or mistreating colleagues.
• Impact: This could damage their career, result in job loss, or tarnish their professional reputation.
4. Inappropriate Social Behavior
• A deepfake of a person behaving inappropriately in a social setting, such as making an offensive gesture, being overly intoxicated, or engaging in other socially unacceptable acts.
• Impact: This could lead to embarrassment, strain on personal relationships, and exclusion from social or community groups.
5. Falsified Academic Dishonesty
• A deepfake image showing someone cheating on an exam, plagiarizing, or tampering with grades or academic documents.
• Impact: This could jeopardize their educational achievements, future academic opportunities, and reputation for integrity.
You can already anticipate attacks on this type of legislation: the harm it seeks to address is also present in the non-explicit deepfake examples above, and, arguably, some of that content may be protected speech. Indeed, under the current New York Times v. Sullivan standard for public figures, most of the above content would qualify as protected speech. Either way, the law recognizes that regulations seeking to protect minors can, on balance, intrude further into free speech rights.
Legislative Definitions
H0391 defines "explicit synthetic media" as any visual depiction that has been created or altered using AI to realistically portray an identifiable individual engaging in sexual conduct. The law makes it a criminal offense to knowingly disclose such media without the depicted person's consent, with penalties including fines and imprisonment. The legislation also provides exceptions for law enforcement activities and certain legal proceedings. An interesting question for this provision is the circumstance of a person's regret: someone who formerly approved of the distribution of an image and then, years later, changes their mind and wants it removed from wherever it is displayed. What then? The legislation does not address that issue, which is likely to become commonplace, given that very young people, with an age-appropriate lack of impulse control, are going to become regretful adults.
Legislative Support and Testimony
The bill received bipartisan support, with legislators emphasizing the need to adapt existing laws to technological advancements. Representative Bruce Skaug, a sponsor of the bill, stated, “There’s a real problem with pornography… though there are laws addressing pornography, it hasn’t been updated to include advancements in technology that make it easier to produce.”
House Bill 407 (H0407): Known as the FAIR Elections Act, this law prohibits the use of AI-generated synthetic media in electioneering communications. It allows candidates to seek injunctive relief against the dissemination of deceptive AI-generated content that misrepresents them during elections.
In January 2024, the Idaho Legislature introduced House Bill 407 (H0407), known as the Freedom from AI-Rigged (FAIR) Elections Act, to address the growing concern over the use of artificial intelligence (AI) in creating deceptive synthetic media, or “deepfakes,” in political campaigns. This bipartisan effort, led by House Minority Leader Ilana Rubel (D-Boise) and House Judiciary, Rules and Administration Committee Chairman Bruce Skaug (R-Nampa), aims to safeguard the integrity of elections by prohibiting the publication of AI-generated synthetic media in electioneering communications.
Background and Motivation
The rapid advancement of AI technologies has enabled the creation of highly realistic synthetic media that can manipulate a candidate’s recorded speech, photos, or videos, potentially misleading voters. Recognizing the threat this poses to democratic processes, Idaho lawmakers sought to implement legal measures to prevent such misuse. As Rep. Rubel noted, “The technology is really pretty stunning in what it can do to replicate a person’s speech, appearance and voice to the point where somebody could create a video of you speaking that could deceive your own mother.”
Provisions of H0407
The FAIR Elections Act defines “synthetic media” as any audio or visual media generated or substantially manipulated by AI to depict a candidate in a manner that would lead a reasonable person to believe the candidate said or did something they did not. The key provisions include:
• Prohibition of Publication: It is unlawful to publish or distribute synthetic media in electioneering communications without clear disclosure that the media has been manipulated.
• Candidate’s Right to Action: Candidates depicted in such synthetic media can seek injunctive relief to prevent its publication and may pursue damages.
• Exceptions: The Act provides exceptions for media that are clearly parody or satire, or for news organizations reporting on the existence of such synthetic media.
Legislative Support and Testimony
During the House Judiciary, Rules and Administration Committee hearing, the bill received unanimous support. Rep. Rubel emphasized the necessity of the legislation, stating, “The level of technical manipulation that’s possible now is very different from anything that voters are accustomed to filtering.”
Marsha Bravo, a military veteran and retired educator, testified in favor of the bill, highlighting the importance of informed citizens in the democratic process. She stated, “We all deserve to know that images related to candidates and elections on the internet are real.”
House Bill 575 (H0575): This bill bans the distribution of AI-generated "deepfake" pornography made to resemble real, identifiable individuals without their consent, conduct commonly referred to as "revenge porn." It addresses the growing issue of non-consensual explicit content created using AI technologies.
As several other states have already done, Idaho now outlaws this type of content, which involves one adult creating a wholly AI-generated image of another adult appearing to engage in intimate conduct. Just as revenge porn laws target the harms that follow when adults consensually share such actual content and later break off their relationship, this legislation outlaws the creation of AI-generated content intended to cause those same harms.
Conclusion
As artificial intelligence continues to evolve at an unprecedented pace, state legislatures are working to address the legal and ethical challenges these technologies present. Idaho's legislative actions this year, including House Bills 391, 407, and 575, highlight the importance of understanding and adapting to emerging legal frameworks. These laws not only reflect growing concerns about AI misuse but also set important precedents for balancing innovation with individual rights and public trust.
For us lawyers, staying current on state legislation is no longer optional; it is essential. Whether advising clients on compliance with new regulations, litigating claims under these laws, or helping businesses navigate the complexities of AI-driven technologies, the role of legal professionals is critical. By staying informed and proactive, lawyers can ensure their clients are not only compliant but also prepared to thrive in an increasingly AI-regulated world.
Now more than ever, the legal profession must be a bridge between technological advancements and the rule of law, ensuring that progress is aligned with the principles of fairness, privacy, and accountability. Keeping a close watch on developments like those in Idaho allows us to better anticipate client needs and provide counsel that is both timely and effective.
NOTE: The social media platform X (formerly Twitter) has sued to block a similar law in California regulating AI in election advertising.
X, the social media platform owned by Elon Musk, has launched a federal legal challenge against California's new law targeting election-related deepfakes and deceptive content. Assembly Bill 2655, which was set to take effect in January 2025, mandates that major online platforms either label or remove manipulated media that could mislead voters. The legislation addresses concerns about the misuse of AI-generated content, particularly deepfakes, to distort political messaging during elections. Musk's company argues that the law infringes on constitutional free speech protections and creates a risk of over-censorship, potentially discouraging platforms from allowing legitimate political commentary.
The introduction of AB 2655 is part of a broader push by California lawmakers to regulate AI’s impact on democratic processes. This law, along with other recently enacted measures, aims to address the challenges posed by rapidly advancing technology in the context of elections. Despite the law’s inclusion of exceptions for parody and satire, X maintains that distinguishing between satire and harmful content could result in excessive moderation. California officials, however, have expressed confidence in the law’s ability to withstand legal scrutiny, framing it as a necessary safeguard for election integrity while preserving freedom of expression.
This lawsuit underscores the growing tension between government efforts to curb AI-driven misinformation and the rights of platforms to regulate their own content. As legal battles unfold, the case raises broader questions about how to balance free speech with the need to protect democratic institutions from the misuse of emerging technologies. California’s approach places it at the forefront of the national debate on technology, elections, and accountability, setting the stage for a critical legal and ethical discussion in the coming months.