Authentimacation
Something looks wrong with that header? Yes. Why? Because that word is not correct. It is not even a word; well, it is not authentically a word. Authentication is something children understand. My 6-year-old and I play "fiction or non-fiction" all the time. I ask her about dragons, or elephants, or my claim that I can run and jump over the house. She responds with either "fiction" or "non-fiction."
Why does it matter? IRL (in the real world, for non-teenagers) it doesn't always matter. In court, it always does. In fact, authentication is so important that there is a separate rule of evidence dedicated to it and countless cases discussing how judges should make determinations about the authenticity of disputed evidence.
Courts Have Always Wrestled With Authentication
Before photography was a thing (that is, before the 1830s), courts authenticated contracts, crime scene evidence, the identity of horses, the ownership of property and so forth. From the 1830s until today they have also dealt with the authentication of images, audio and eventually video. But why the concern over the authenticity of such things? Isn't this a case of "I know it when I see it"? Short answer: no. That has never been the case. The dawn of fake photography followed the dawn of photography by about 20 minutes. Without belaboring all the historically difficult and time-consuming methods once needed to make an undetectably altered photograph, we all know by now that alteration has been trivial to accomplish with computer-based tools for more than 50 years, and now it is free and seamless with tools on all of our phones.
If ever there was a time for courts to strictly enforce the rule of authentication for documents, photographs and video content, it is now.
Nothing Is Authentic?
That is the position of the rules of evidence, by implication. Having a rule that requires steps be completed before a piece of evidence can be accepted as authentic implies that nothing is authentic until proven so. The burden is on the party offering the evidence to establish its authenticity. So, yes, nothing is authentic. Even where the rules self-establish authenticity (for example, public records), the offering party still has to present credible evidence that the document is a fair and accurate duplicate of a public record (a county seal on the record, documentation from the governmental agency or person responsible for maintaining the record, etc.).
There is not now, and never has been, a "judge, come on, look at this contract, it has ink signatures on it, doesn't it look good to you, just admit it."
Relative Relevance
Surprising to many attorneys, the first step in establishing the authenticity of evidence is to persuade the judge it is relevant. After all, why bother with authenticity requirements if the document, image, audio or video is not relevant to an issue in the case? Once you get past relevance, which temporarily assumes authenticity, then the authentication battle begins. But, does it?
I have been in countless pre-trial courtroom arguments about evidence authenticity, images and videos mostly, in which co-counsel or prosecutors or even the court viewed the disputed evidence and conveyed the notion that since it looked real, "what was the issue here, Dean?"
This really should not be the default, especially in 2024. AI is here, in case your judge didn't notice. The first thought in my head is simply to set your laptop on the judge's bench and visit thispersondoesnotexist.com. The four images below were generated by AI at that website, free, within 30 seconds. None of them are images of real people. Nor are they composite images created from images of real people. They are images generated entirely by an AI algorithm: people who do not and have never existed.
But, of course, images of actual people, witnesses or parties in your case, can also easily be altered in undetectable ways to place them in locations they never visited or depict them engaging in conduct that never occurred. Videos are already there too, of course, as past posts here have discussed in detail.
The Rule
Federal Rule of Evidence 901 governs authentication of disputed evidence. Rule 901(b) contains a non-exhaustive list of examples of the kinds of evidence that can be authenticated and the kinds of supporting information that will suffice.
It is easy to tell that many of the examples were written (and have not been updated) before the advent of modern computers and their capacity for undetectable alteration, and certainly before anyone anticipated the power of AI that is so widely known today.
(1) Testimony of a Witness with Knowledge. Testimony that an item is what it is claimed to be.
(2) Nonexpert Opinion About Handwriting. A nonexpert’s opinion that handwriting is genuine, based on a familiarity with it that was not acquired for the current litigation.
(3) Comparison by an Expert Witness or the Trier of Fact. A comparison with an authenticated specimen by an expert witness or the trier of fact.
(4) Distinctive Characteristics and the Like. The appearance, contents, substance, internal patterns, or other distinctive characteristics of the item, taken together with all the circumstances.
(5) Opinion About a Voice. An opinion identifying a person’s voice — whether heard firsthand or through mechanical or electronic transmission or recording — based on hearing the voice at any time under circumstances that connect it with the alleged speaker.
(6) Evidence About a Telephone Conversation. For a telephone conversation, evidence that a call was made to the number assigned at the time to: (A) a particular person, if circumstances, including self-identification, show that the person answering was the one called; or (B) a particular business, if the call was made to a business and the call related to business reasonably transacted over the telephone.
In previous posts I have offered examples of AI voices which can be produced in mere minutes, free, on many websites. Social media is replete with examples of famous politicians, actors and others seeming to say things, via audio or video, that we all know they never said. Recognizing that, the examples above, still held out in the rules in 2024 as ways to authenticate such content, are quaint...and dangerous.
Ingenuity and Disingenuity
Another made-up word, disingenuity, which is appropriate for a post about the near-colonial-era rules of authentication colliding with the powers of AI in 2024. The challenge for us as lawyers, armed with knowledge of the modern tools of misdirection and the antiquated rules that apply, is to somehow persuade a judge to say "I know that is the rule, but it's not enough."
Thankfully, courts have discretion in how to interpret and apply those rules, and humans are "in the loop," as is said with AI these days. The problem is, judges favor doing what other judges have done in the past. Stare decisis is a useful tool for maintaining legal consistency, enabling the public to understand what the law is without it changing with every rogue court decision untethered from the requirements of precedent. That's a great idea for a system. It's a terrible idea for a system using 100-year-old authentication rules when the humans in that system know those rules are inadequate to the task.
The cynical reality here is that lawyers offering evidence devoid of the features we all know should be present to establish its authenticity ("judge, it's a recording from my client's phone of her husband threatening her life") can simply say, "I can meet the rule of authentication on this, judge." Great. But can you meet the rule of reality? What's that you say? The rule of reality does not apply. And that lawyer would be correct. Reality does not apply; the rules do. It's the classic "riddle, wrapped in a mystery, inside an enigma." Or is it?
Footprints That Never Erode
Why are many of us a little queasy about using any mobile device? Because it has a blizzard of sensors which are tracking who knows what, how often, and sharing it with who knows whom. We all recognize that the utility of such devices, unfortunately, comes with the privacy-infringing reality that so much of what we do all day is being recorded, tracked, stored, shared, etc. For authentication purposes, however, that's where the solution might lie.
Consider the hypothetical above: the claimed threat in an audio message presented by a potential victim of domestic violence. There are not 100 ways that audio came to rest on the offering witness's phone if the audio file is authentic. The authenticity of that message, then, becomes trivially easy to establish through electronic footprints. Here are some suggestions:
Suppose the bad actor made the statement while the two parties were together at home or at a public event, and the offering witness recorded it, secretly or otherwise. If true, what else do you have for purposes of authentication?
- Other ear witnesses.
- Data from the bad actor's phone demonstrating they were at that location, at that time, at that public event.
- Other witnesses who saw the bad actor there.
- Evidence from the offering party's phone showing the moment that file was recorded by that phone, and where on the phone it was recorded (i.e., in the place where the app that was used is, by default, designed to record such audio).
- The absence on that phone of any audio editing software.
- The absence on the phone of multiple copies of that audio file.
- Security footage from the location showing the parties in a discussion or argument.
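Some of these footprints can be checked with ordinary tools. The sketch below is a simplified illustration, not a forensic procedure (real examinations use specialized tooling and preserve chain of custody); the file names are hypothetical. It collects a file's size, timestamp and content hash, then scans a folder for byte-identical copies of the same recording, the kind of duplicate that can hint at an editing session:

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path


def file_fingerprint(path: Path) -> dict:
    """Collect basic provenance clues for a file: size, timestamp, content hash."""
    st = path.stat()
    return {
        "size": st.st_size,
        # Modification time: when the file's contents last changed on this device.
        "modified": datetime.fromtimestamp(st.st_mtime, tz=timezone.utc).isoformat(),
        # SHA-256 of the contents: identical hashes mean byte-identical copies.
        "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
    }


def find_duplicate_copies(root: Path, target_hash: str) -> list[Path]:
    """Scan a directory tree for files whose contents match a known hash."""
    return [
        p
        for p in root.rglob("*")
        if p.is_file() and hashlib.sha256(p.read_bytes()).hexdigest() == target_hash
    ]
```

Running `file_fingerprint(Path("threat.m4a"))` on the original device, rather than on a copy emailed around later, is what preserves the evidentiary value of the timestamps.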
Granted, recovering much of this information is time-consuming and could be expensive. True. However, if you are representing a person adamantly insisting they did not make the threat, and their professional or personal reputation hangs in the balance, you have recourse. There are things to subpoena and arguments to make before simply letting the judge hear the otherwise harrowing threat and conclude your client is a bad actor.
People fake their own kidnappings. People fake email exchanges to persuade others to commit crimes. People use former partners’ email addresses in various ways posing as that person. A woman in a love triangle sent thousands of emails and text messages posing as her romantic rival nearly causing the person she was posing as to be arrested and charged with various harassment crimes.
The paragraph above could go on for pages. The point is made. Simply hearing an audio exchange, reading an email exchange or viewing a video is insufficient to determine that it is authentic. Lawyers and judges have to abandon the likely lifelong notion that when something we view or hear looks authentic, it is. Instead, it should be replaced with the philosophy underlying Rule 901: nothing is authentic until it is proven to be so…and ideally the proof required should consider the reality of AI in 2024.
One example from arguments I have made in court about emails may be of use here. "What is an email?" I would often ask courts when arguing the authenticity of an email presented in court as a printout on pieces of paper handed to everyone involved. The letter 'e' in email means something. It means "electronic." Therefore, something printed on a piece of paper purporting to be an email is not an email. It is words on paper being offered by a party as the printout of the electronic file of an email. But it's worse than that. When we read emails on screen, we see the from, to, subject and content of the email itself. We are used to that. What we don't see is the host of metadata associated with every email. That metadata answers questions like: Was there a bcc on that email? When was it received by the recipient, and at what IP address? What was the sender's IP address? Several cases you can look up online involved fake emails whose fraudulent nature was easily determined by reviewing this metadata and noticing that the IP address from which the email was sent, claiming to be sent by John Smith, belonged to someone else. None of that can be known from a piece of paper with words printed on it purporting to be the accurate copy of an electronic file that contained the contents of an authentic email.
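That hidden metadata is not exotic; it ships with every message and can be read with standard tools. The sketch below uses Python's standard email library on a fabricated raw message (every address, IP and hop here is invented for illustration; a real message would come from an .eml export or the mail server itself, never from a printout). Each relaying server prepends its own Received header, so reading those headers in order reveals the path, and IP addresses, the message actually traveled:

```python
from email import policy
from email.parser import BytesParser

# A fabricated raw message, purely for illustration.
raw = b"""Received: from mail.example.net ([203.0.113.7])
        by mx.recipient.org; Mon, 1 Apr 2024 09:15:02 -0400
From: "John Smith" <john@example.net>
To: jane@recipient.org
Subject: Meeting
Message-ID: <abc123@example.net>

See you at noon.
"""

msg = BytesParser(policy=policy.default).parsebytes(raw)

# The Received chain and Message-ID are exactly the fields a paper
# printout throws away.
received_chain = msg.get_all("Received", [])
print("Hops:", received_chain)
print("Claimed sender:", msg["From"])
print("Message-ID:", msg["Message-ID"])
```

If the IP in the earliest Received hop belongs to someone other than the claimed sender's provider, the "John Smith" email deserves a much harder look.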
Personally, I would never accept the authenticity of a claimed email whose only evidence was something opposing counsel offered printed on a piece of paper. You are just asking for trouble for your client, and professional responsibility issues later if your client receives an adverse criminal or civil ruling and a later lawyer realizes the “emails” upon which it was based were Microsoft Word documents cooked up by someone.
The point of all of this is that proving the authenticity of audio, video, images and even documents that are not the subject of nefarious alteration is fairly straightforward. The information to establish that authenticity resides on the device(s) where that electronic information was created, recorded, etc.
The ideal default rule should be that if a party offering such evidence cannot bring forward the electronic footprints showing the provenance of the file which resulted in the offered evidence, they lose. Why? Because that evidence is being created automatically and in multiple ways on such devices enabling the offering party an easy opportunity to establish its authenticity.
Certainly, folks versed in AI tools will discover ways to implant electronic files on devices in ways that mask their fraudulent origins. But, for now, hacking the entire internet to create a fake trail of an email from your computer to mine is likely out of reach, and even if it were regularly within reach of hackers, your opposing party is not likely one of them with that capability. Companies that build the network devices and data centers have some of the most robust security available. They carry banking information, military secrets and other important information. Their sole focus, day in and day out, is how to wall that information off from hackers determined to access it, potentially copy or misdirect it, and ultimately produce it in a way that makes it appear authentic.
AI Laws and Guidelines Skipped This Part
I have written recently, and will continue to write, about both domestic and international AI regulation. While many of those regulations will evolve over time, all of them leave a huge hole in the system that affects all of us. They are focused on uses of AI that could generate unfairness at scale. A good instinct, to be sure. However, in the U.S., so many of our lives come into contact with the justice system in ways that often have lifelong effects. What AI regulations and guidelines need to address is the insistence that no evidence be admitted in court without a heavy burden on the offering party, given the ease of AI-powered manipulation of all forms of evidence.
It is understandable that the current philosophy of AI regulation is targeting where it can do the most good (or avoid the most harm) for the largest number of people. Our bias as attorneys is understandable as well. We see the effects of the justice system every day on our clients and for many of us on our own sense of unfairness or fairness.
Nothing seems as fundamental to obtaining justice in the system as ensuring that the things we are all arguing about are, at the very least, authentic. What a waste of time to be in a hearing, trial, deposition or litigation generally, only to find out later that the non-testimonial information we spent so much time and energy on was fraudulent.
Conclusion (Or is it to be continued…?)
In this era of technological advancement, particularly with the emergence of sophisticated AI tools, the challenge of authenticating evidence has never been more daunting and potentially expensive. The historical rules (and case law) regarding authentication, though steeped in tradition and legal precedent, are increasingly inadequate for handling the complexities introduced by digital and AI-generated content. The reality is that in today's digital age, seeing is no longer believing.
Our legal systems must adapt swiftly to this new reality by reevaluating and potentially redefining what constitutes acceptable authentication practices. This goes beyond merely updating existing rules; it requires a fundamental shift in how we perceive and evaluate digital evidence. The aim should not only be to catch up with current technologies but to anticipate future developments that could further complicate the authentication landscape.
We lawyers need a more rigorous and technologically informed approach to evidence authentication. This involves not only understanding the technical aspects of digital content creation and manipulation but also integrating robust digital forensic practices into the judicial process. By doing so, we can safeguard the integrity of the justice system ensuring decisions are based on evidence that is not just convincing, but authentic.
It is imperative to ensure that our justice system remains fair, accurate, and relevant in an age where digital technology permeates every aspect of our lives. As we continue to navigate these challenges, the dialogue between legal expertise and technological innovation will be crucial in crafting policies that uphold the foundational principles of justice while embracing the inevitable advancements of the digital age.