Westlaw, Lexis, Google Scholar. We all know what those tools are for. Many of you subscribe to one or more of them or have bookmarked the free Google Scholar source for legal research. But why? Well, your client relates a factual situation to you. Part of your due diligence in preparing arguments, briefs, and strategy is discovering similar legal decisions and studying their analysis and outcomes.
Your AI-Injured Client Will See You Now
Soon, if not already, a client will arrive in your office. You will conduct the usual Who, What, Where, When, etc. to begin assessing their case. And then, somewhere during that conversation, it will occur to you: “Was there AI-powered software, a machine, or automated decision-making involved here?” Once you begin that assessment, an eventual question will be, “Are there any other circumstances like this?” Just as Westlaw, Lexis, and Google Scholar let you search legal matters, the AI Incident Database collects reports of AI-related “incidents” from around the globe. It is a 501(c)(3) non-profit, which tends to relieve concerns about bias (coincidentally, one of the categories of incidents it catalogs). You probably want to bookmark it in your browser. Here’s why.
What The Heck Does A Judge Know About AI Anyhow?
The short answer: they know about as much as the average lawyer…which is to say, not that much. As lawyers we are all well accustomed by practice and strategy to being respectful and deferential to judges. It is all to your client’s advantage to maintain a good working relationship with your judge, whether rulings are going your way or not. But let’s be realistic. Judges are lawyers. In most states, they are lawyers elected to put on a robe and decide stuff. When it comes to technology and AI, the probability is they have no idea what it means or how AI might sit in the proximate cause chain of some harm your client suffered. That is not a sign of a lack of intelligence. It is why we have expert witness rules in trials. Judges (and juries) often need others to explain the operation of systems and their impact on the parties in a case.
It’s your job first to know how to identify that AI-related harm (if it is involved) and then to disclose it to the court in an understandable way. One important way to add to your persuasion in this regard is to…you guessed it, have access to similar circumstances that have occurred elsewhere. Just as Westlaw and Lexis provide you with persuasive legal matters relevant to your client’s case, the AI Incident Database now enables you to search across a variety of categories and incidents to find one or more similar to the one facing your client.
Some of the legal arguments that could arise or be strengthened by reference to the AI Incident Database:
Prior knowledge of a party regarding the particular AI risk your client suffered, because that defendant has an incident in the database.
Prior knowledge of a party because a prominent player in its industry has a well-publicized, AI-related incident in the database involving the same software, machine, or tool.
Regulatory schemes previously unknown to you that were triggered by an incident reported in the database.
Names of persons or entities referenced in AI incidents.
Types of harms in general suffered by others in AI incidents similar to the harm your client suffered.
The sheer number of similar AI incidents, which can equate to tacit knowledge within the industry in which a party operates.
The Philosophy of the Database
No database is infallible, especially one claiming to sort AI incidents (what is an “incident” in this context, anyhow?) into various categories with unknown or varying injuries and proximate-cause positions.
The AI Incident Database describes itself as “dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems.” Like similar databases in aviation and computer security, it aims to learn from experience so we can prevent or mitigate bad outcomes.
Be Parsimonious To Avoid Embarrassment
We all know well the legions of mocked, parsimonious responses people have made throughout history to try to get unstuck from sticky situations. So before you loudly pronounce in court that X number of incidents have occurred just like this one, consider how the database is created.
Start with “harms or near harms.” Right there: what is a harm? And after figuring that out, what is a “near harm”? The names and qualifications of those involved in making those decisions are here.
The Editor’s Guide, which provides definitions for the key terms underlying decisions about inclusion in the database, is here.
The database accepts incident reports from, well, anyone. There appears to be some review and vetting of those reports before they are entered into the database, but the rules governing what makes it in and what does not are not published on the website. There is a feature called “Incident Report Submission Leaderboards,” which lists the persons who have submitted the most AI incidents. Apparently, there is no recognition that tethering a promotional leaderboard to the number of incidents reported might drive perverse incentives. For example, persons wanting to be listed on, or to climb, such a leaderboard could be incentivized to inundate the database with reports that push the margins of harms or “near harms.” A question for another day.
The site’s word cloud provides a snapshot of the types of harms the database is already tracking.
A “leaderboard” of the entities alleged to have deployed the AI that caused a harm or near harm appears here. It is also a useful quick reference for whether your client's matter involves a frequent flyer for AI incidents. The organization is careful to qualify its data as the “alleged” deployer of the AI, the “alleged” injury caused, and so on. This indicates the database is not devoting investigative resources to determining the legitimacy of reports beyond what the submitter provides.
The Submission page is here. The input fields appear to seek information about a potential AI incident that is publicly available online.
The Editor’s Guide
For purposes of submission to the database, an AI incident is defined as “an alleged harm or near harm event to people, property, or the environment where an AI system is implicated.”
The examples of AI systems used by the database are instructive:
A self-driving car
Facial-recognition software
Google Translate
A credit-scoring algorithm
In a previous column we took apart the most prominent features of the recently passed EU AI Act. In these examples, the database seems to be tracking those same consumer-facing concerns about safety, surveillance, and bias.
The definitions also highlight that algorithms themselves have not traditionally been considered AI systems, but they would fall into that category if a “human transfers decision making authority to the system.” (Terminator much, anyone?) The example given is “a hospital system select[ing] vaccine candidates based on a series of hand tailored rules in a black box algorithm.”
This also points out something to alert courts to: there is no algorithm ever invented, or that ever will be invented, that is free from human bias. Every decision made by these tools is at some point influenced by humans. The dirty little secret of every search engine (yes, you, Google) and every other kind of tool (iMessage, which seems not to know any curse words and keeps defaulting to Puritan versions like ‘dang’ and ‘shoot’ despite your clear spelling of, you know, the right word) is that humans are always involved. They are making decisions all the time about what to keep in the training data, what outputs to delete entirely, what outputs to modify, what questions not to answer, what decisions not to make, and so on.
Consider self-driving. Humans sometimes have to make moral decisions while driving. Example: you are driving on a narrow road, a small child runs out from behind a parked car chasing a ball, and there is no time to brake, only time to jerk the wheel and roll up onto the sidewalk where two elderly people are talking. Or hit the child. Every driver in that situation decides: collide with whom? Well, Elon and his folks at Tesla are coding that decision. They cannot do both; they have to choose. They are making that choice and programming it into millions of cars at this point. (Put aside the fact that millions of drivers and pedestrians have already outsourced such moral decision-making to a few really smart engineers at one company.) Humans are in the loop, always. Do not allow a lawyer representing someone operating AI-powered software, a machine, or a tool to fog up the courtroom with some version of “Judge, the [tool, software, machine] made the decision, not my client.”
The Harms
The harms considered for submission to the database form a wide-ranging list covering most, if not all, of the legal harms your client might face in an AI-related incident:
Harm to physical health/safety
Psychological harm
Financial harm
Harm to physical property
Harm to intangible property (for example, IP theft, damage to a company’s reputation)
Harm to social or political systems (for example, election interference, loss of trust in authorities). (Not sure how you might litigate this one to a judgment, but there it is)
Harm to civil liberties (for example, unjustified imprisonment or other punishment, censorship)
The incident database has helpful views of the data, including “cards” with titles and quick summaries. There are also data categorizations providing quick insight into which categories lead in recorded incident reports.
The value of this tool is not in memorizing anything, but in remembering it as a reference that leads to other resources for verifying AI incidents, which may in turn yield persuasive arguments in your client’s matter.