Retail theft has been a concern of retailers since the dawn of retailers. Presumably, owners of small general stores in small-town America in the 1800s relied on intuition born of experience when a “stranger” entered their store ostensibly to purchase something. That was likely an imprecise method, accurate only coincidentally at times, operated by a human brain in which bias would inevitably be present.
Into the ever more technological attempt to detect and stop shoplifting comes AI. But, AI what? Faster cameras? Video analysis to detect sleight of hand deposits of products into oversized coat pockets? Identifying a customer visiting their fourth Rite Aid store in the past 6 hours looking, you know, “all suspicious?” Rite Aid made their move and they found out.
The Rough Patch
Apparently, Rite Aid ran into a bit of a financial rough patch and filed for Chapter 11 bankruptcy in October 2023. Not sympathetic to their plight, the FTC filed a complaint against Rite Aid, while the company was still in bankruptcy, regarding their use of anti-shoplifting AI deployed in their stores.
Specifically, the FTC alleged in their complaint that Rite Aid engaged in “unfair acts or practices in violation” of the FTC Act. Well, to the law books to find out what the heck the FTC Act is regulating.
Section 5(a) of the FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce.” Uh, well, that is a universe of whack-a-mole possibilities now, isn’t it? Rite Aid found one.
The Resolution
In a move analogous to parenting, Rite Aid agreed to go into AI timeout for five years, and the bankruptcy court approved the resolution. But, for what? Well, the limitations in the agreed order provide some insight:
Rite Aid is prohibited from using “any Facial Recognition or Analysis System.”
Rite Aid is required not only to delete any information gathered by those systems, but also to reach out to any third parties who could have retained copies of that data and ensure they deleted it as well. (This sort of highlights that these systems, even when used by a particular owner or purchaser, might well be spreading the gathered data all around as a normal function.)
Rite Aid’s AI System
Their system created a database of persons they identified as involved in attempted or actual criminal activity in stores. The database included images, descriptions and other details of the suspicious activity of the identified person. Rite Aid instructed store security to add as many people to the database as possible. It eventually included more than 10,000 entries.
The database was not used to predict future criminal behavior by others. Instead, it was used to identify that person when they re-entered a store. Security was told that if a person entering the store matched a record in the database, they should approach that person and ask them to leave. If they refused to leave, the police were to be called.
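To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of how an embedding-based watchlist of this kind might work. Nothing in it comes from Rite Aid’s actual system; the record fields, the cosine-similarity matcher and the 0.85 threshold are all my own assumptions. The point is only that a single match threshold governs the tradeoff between missing enrolled persons and falsely flagging ordinary shoppers.

```python
from dataclasses import dataclass
from math import sqrt
from typing import List, Optional

# Hypothetical enrollment record, loosely mirroring what the complaint describes:
# an image-derived face embedding plus free-text notes about the alleged incident.
@dataclass
class EnrollmentRecord:
    person_id: str
    embedding: List[float]   # placeholder face embedding computed from the enrollment image
    incident_notes: str      # "suspicious activity" description entered by store security

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def match_entrant(entrant_embedding: List[float],
                  watchlist: List[EnrollmentRecord],
                  threshold: float = 0.85) -> Optional[EnrollmentRecord]:
    """Return the best watchlist match at or above the threshold, else None.

    The threshold is the whole ballgame: lower it and more ordinary shoppers
    are flagged (false matches); raise it and more enrolled persons walk in
    unnoticed (missed matches). Low-quality enrollment images of the kind the
    FTC complaint describes push genuine and false matches closer together,
    so no threshold performs well.
    """
    best, best_score = None, threshold
    for record in watchlist:
        score = cosine_similarity(entrant_embedding, record.embedding)
        if score >= best_score:
            best, best_score = record, score
    return best

# Illustrative use with made-up numbers; a real system would compute embeddings
# from camera frames with a face-recognition model.
watchlist = [EnrollmentRecord("entry-0001", [0.9, 0.1, 0.3], "alleged concealment, aisle 4")]
hit = match_entrant([0.88, 0.12, 0.29], watchlist)
if hit is not None:
    print(f"Alert security: possible match with {hit.person_id} ({hit.incident_notes})")
```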
The FTC complaint focused on the claim that the database of images, with their uneven quality and other potentially errant data, was misidentifying customers as persons in that database. This misidentification resulted in legitimate customers with no actual history of attempting to steal from the store being escorted out of stores. So, the order seems to say to Rite Aid…shut it down.
But, all is not as it seems. The order then goes on to give Rite Aid permission to deploy an AI system subject to a list of monitoring, testing, assessment and reporting requirements. Too voluminous to list here, but summarized briefly:
Testing relating to the rate or likelihood of Inaccurate Outputs
Factors likely to affect the accuracy of the type of Automated Biometric Security or Surveillance System deployed
Documentation and monitoring of the AI system
The methods by which any algorithms comprising part of the Automated Biometric Security or Surveillance System were developed, including the extent to which such components were developed using machine learning or any other method that entails the use of datasets to train algorithms, and the extent to which these methods increase the likelihood that Inaccurate Outputs will occur or will disproportionately affect consumers depending on their race, ethnicity, gender, sex, age, or disability status.
Rite Aid’s obligations appear to focus on having an independent third party constantly monitor, measure and document how their system produces results. The order also requires Rite Aid to review the outputs of its system for “any identified risks that consumers may experience physical, financial, or reputational injury, stigma, or severe emotional distress, including in connection with communications of the Outputs to law enforcement or other third parties, taking into account the extent to which such harms are likely to disproportionately affect particular demographics of consumers based on race, ethnicity, gender, sex, age, or disability (alone or in combination)….”
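The order does not prescribe a particular metric, but in practice the testing and review obligations summarized above tend to boil down to something like the following hedged sketch: log the system’s outputs, have humans label which ones were wrong, and compare error rates across groups. The field names, the sample log and the 1.25x disparity screen are my own illustrative choices, not language from the order.

```python
from collections import defaultdict

# Hypothetical evaluation log: each entry is one system output (a match alert)
# that a human reviewer later labeled as a correct match or a false match.
evaluation_log = [
    {"group": "A", "false_match": False},
    {"group": "A", "false_match": False},
    {"group": "A", "false_match": False},
    {"group": "A", "false_match": True},
    {"group": "B", "false_match": False},
    {"group": "B", "false_match": False},
    {"group": "B", "false_match": True},
    {"group": "B", "false_match": True},
]

def false_match_rates(log):
    """Per-group false-match rate: labeled false matches divided by total outputs."""
    totals, errors = defaultdict(int), defaultdict(int)
    for entry in log:
        totals[entry["group"]] += 1
        errors[entry["group"]] += entry["false_match"]
    return {group: errors[group] / totals[group] for group in totals}

rates = false_match_rates(evaluation_log)
print(rates)  # {'A': 0.25, 'B': 0.5}

# One common (and debatable) screen: flag the system for review if any group's
# error rate exceeds some multiple of the lowest group's rate.
lowest = min(rates.values())
needs_review = {g: r for g, r in rates.items() if lowest > 0 and r > 1.25 * lowest}
print("Groups needing review:", needs_review)  # {'B': 0.5}
```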
The inference that can be drawn from the order is that Rite Aid’s system(s) were tracking shoplifters and somehow identifying people as either potential or actual shoplifters in a way that resulted in different groups being over-represented in those results. Perhaps particular groups were over-represented in the enrollments in the database to begin with.
As I have written previously, these problems of bias are never going to be remedied. Because all humans have biases, logically, all systems humans create will have them as well. As I was working on this draft, the story of Google’s newest AI image generation tool exploded online. The crux of that story is the obvious bias injected into the outputs of that system, over-representing demographic groups in alignment with the values of the AI tool’s developers/creators. Vikings came from Scandinavia. However, Google’s image creation tool, when asked to create images of a Viking, produced what you see below.
The issues Rite Aid is dealing with may or may not involve purposeful bias in an AI system. However, government regulators (and we lawyers) are going to be forever examining AI systems for such biases. The question in all of those cases will ultimately come down to whether the disproportionate effect on one demographic group or another in the outputs of those systems was the result of intentional tinkering with outputs, poor design or no inherent bias at all.
The interesting philosophical question is whether regulations in this area will make the assumption that members of all groups are involved in activities (legal or not) at the same rate in all areas of all places where people live across all time periods. When I was in grade school, most boys played baseball. Today, I presume most school age boys play soccer or video games. Things change. People’s preferences change. Their biases (baseball versus soccer) change as well.
The Default May Not Be Anyone’s Fault
For lawyers defending the use and operation of these systems, a range of attitudes will be deployed in response to bias claims, from indignant to apologetic to explanatory. However, AI systems are here to stay, and they are only going to become more infused in our lives. Reflexively accepting an AI design bias claim solely because a system’s deployment produces datasets that are not parceled out in percentages precisely matching those of the targeted community is not the only answer. We are lawyers. Ideally, facts should drive conclusions. Of course, that is not always the way the law works, but at the outset, before you know anything else, know the questions to ask to obtain the information you need.
I Represent A Company Facing An Improper AI System Use Claim
Your client contacts you, forwarding plaintiff’s counsel’s communication that their client believes an AI system your client deployed has somehow run afoul of a regulation, statute or the existing federal Executive Order. Your first move? Ask your client questions. (A non-exhaustive list to get you started)
What is the name of the AI system?
Was it developed in house or commercially purchased/subscribed to, etc?
What was the system deployed to accomplish?
If an outside company, who are other customers using that system? (i.e. any other claims predating the one your client is facing?)
What information was requested of that company by your client before the system was deployed?
What ongoing monitoring of the system was conducted by your client?
Where are the results of that monitoring?
How frequently was that monitoring performed?
Did it detect any issue with the system before this claim arose?
What state is the claimant from? (States are now enacting their own AI regulations)
If internally created, who is the point of contact within the company who oversaw its development and deployment?
What data does your client have about the system’s error rate, pre-deployment outputs, etc?
Copies of all communications between your client and the AI company’s reps (email, Slack, Teams, Webex, etc.)
My Client Believes They Were Harmed or Mistreated by an AI System
First move? Again, ask questions.
What is the claimed harm? (We all know to do this, but here, you want to first refer to the kinds of harms that Executive Orders, regulations and local statutes are targeting as they are unique in some cases to AI system deployments)
Is there a chance others were similarly harmed?
Class action possibilities are especially important to consider in all of these cases. Why? Because AI deployments are never going to come into contact with just one person. They are systems trained on and designed for the economies of scale accompanying general predictability. Short answer: when AI systems go wrong, they are going to generate hundreds and perhaps thousands of claims. (See Rite Aid above)
Name of the company deploying the system.
Try and determine the name of the company that developed the system if not the same as the one that deployed it.
Speculate about what the system was intended to accomplish.
Is there an innocent explanation for the outputs you can anticipate opposing counsel to raise in response to your eventual claim letter to them?
Are there other claims predating your client’s story?
Just Identifying Bias Is Not Enough
I was speaking with a non-lawyer friend the other day. She asked, “would a lawyer investigate that type of claim?” And, of course, my response was, “It depends.” Likewise here. You do not want to frivolously spend hours investigating an AI-based claim only to discover that the system was working precisely as intended or that other previous claims have already been knocked out on some basis that had not occurred to you at the outset.
Bias is Everywhere. Bias Has Always Existed. Bias Will Always Exist
As plaintiff or defense counsel in such a matter, do not get too concerned or excited about the viability of an AI injury claim merely because someone involved claims the facts show the AI system was biased.
I would presume younger drivers speed more than older ones, text while driving more, and are generally more reckless than older drivers. An AI system trying to detect and even ticket drivers for such conduct might reasonably issue tickets that are disproportionate in that same way. This is not an improper AI system bias. It is a reality of the likely demographic slice engaged in a particular kind of risk-tolerant conduct.
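A toy simulation makes the point. Assume, purely for illustration, two driver groups with different underlying speeding rates and a detector with identical detection and false-ticket rates for both. The disproportionate ticket counts fall out of the assumed base rates, not out of anything in the detector. Every number here is invented.

```python
import random

random.seed(0)

# Invented base rates purely for illustration: the share of trips on which a
# driver in each group actually speeds.
BASE_SPEEDING_RATE = {"younger": 0.20, "older": 0.08}
DETECTION_RATE = 0.90      # chance a real speeding event is ticketed (identical for both groups)
FALSE_TICKET_RATE = 0.01   # chance a non-speeding trip is wrongly ticketed (identical for both groups)
TRIPS_PER_GROUP = 100_000

for group, speeding_rate in BASE_SPEEDING_RATE.items():
    tickets = 0
    for _ in range(TRIPS_PER_GROUP):
        speeding = random.random() < speeding_rate
        if speeding:
            tickets += random.random() < DETECTION_RATE
        else:
            tickets += random.random() < FALSE_TICKET_RATE
    print(f"{group:>7}: {tickets:,} tickets per {TRIPS_PER_GROUP:,} trips")

# Expected shape of the output: roughly 18,800 tickets for "younger" versus
# roughly 8,100 for "older", even though the detector treats both groups
# identically. The disparity reflects the assumed base rates, not detector bias.
```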
The FBI has for decades maintained statistics on the demographics of offenders and victims across a range of common criminal offenses. (See example here.) An AI system that detects crime at rates that are not precisely equal across all demographics would not necessarily be biased; it would reflect patterns present in that FBI data for decades before AI was in everything. The economist and author Thomas Sowell has long pointed out that, just as individuals have differing abilities, weaknesses, skills and blind spots, it is quite illogical to assume that demographic groups would not display differences as well.
“But we can at least try to treat [demographic disparities] and other theories as testable hypotheses. The historic consequences of treating particular beliefs as sacred dogmas, beyond the reach of evidence or logic, should be enough to dissuade us from going down that road again—despite how exciting or emotionally satisfying political dogmas and the crusades resulting from those dogmas can be, or how convenient in sparing us the drudgery and discomfort of having to think through our own beliefs or test them against facts.”
― Thomas Sowell, Discrimination and Disparities
People Confront Biases At Their Peril Sometimes
A recent podcast interview with the Harvard scholar Roland Fryer detailed the effect on his career following the publication of his study on presumed bias in police shootings. Fryer admits he fully expected what the conclusion of his study would be. He conducted it with 18 different graduate students. He saw the results and was shocked. The study revealed a slight bias in the “roughing up” of black suspects by police compared to other racial groups. However, it also revealed black suspects were less likely to be shot by police than other groups. (See New York Times article on the study here.)
He disbelieved his own results, so he did not publish them. Instead, he hired 18 new graduate students to redo the entire research. Same result. He finally published the results and the blowback was immediate. Why? Because many critics shared Fryer’s same pre-study bias, and the results collided with that bias. Fryer accepted the results of the twice-analyzed data. However, the response went beyond criticism (which, in response to the publication of any study, is entirely appropriate, as it is the best way to get at the truth). His position as a tenured professor was threatened and the then-President of Harvard wrote a letter seeking his dismissal.
Bias is a powerful force. For lawyers raising its effects on behalf of our clients, or defending the outcomes of AI systems presumed to be biased, recognizing that bias exists in everyone is an important first step. The Rite Aid case points out that the most common form of bias is not the unintended effect of an AI system designed without an intended bias. Instead, it is the effect of systems in which humans alter the operation of the AI system and inject bias in one direction or another.
Conclusion
For lawyers, the solution here is not merely to accept that AI systems will always treat people or groups improperly. It is equally unhelpful to treat a disproportionate outcome as equivalent to bias. AI systems will have biases. Transparency about what data was used to train the AI system, along with regular pre-deployment and post-deployment testing, will be good ways to identify bias early on. Beyond that, tracking outcome results against the input data will also help determine whether disproportionate outcomes are a result of bias or merely a reflection of, for example, younger drivers being more risk tolerant.
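That last suggestion, tracking outcomes against the input data, can be made concrete with a short audit-style sketch. The counts below are invented; the idea is simply to compare each group’s share of the system’s flags to its share of the underlying input events. A ratio near 1.0 points toward base rates; a ratio well above 1.0 points toward something in the system worth investigating.

```python
# Invented audit counts: how many events each group contributed to the system's
# inputs, and how many of the system's flags landed on each group.
input_events = {"group_a": 60_000, "group_b": 40_000}
flags_issued = {"group_a": 1_500, "group_b": 2_500}

total_inputs = sum(input_events.values())
total_flags = sum(flags_issued.values())

for group in input_events:
    input_share = input_events[group] / total_inputs
    flag_share = flags_issued[group] / total_flags
    ratio = flag_share / input_share
    print(f"{group}: input share {input_share:.0%}, flag share {flag_share:.0%}, ratio {ratio:.2f}")

# Here group_b supplies 40% of the inputs but receives about 62% of the flags
# (ratio ~1.56), which is exactly the kind of gap that warrants digging into
# training data, enrollment practices and thresholds before drawing conclusions.
```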
The public rightly fears the unknown of AI systems making seemingly autonomous decisions affecting healthcare, employment, criminal justice and other important areas of their lives. That fear can be greatly reduced by lawyers asking the right questions and seeking the right information when claims of AI system bias arise. There will be biased AI tools. Some will be intentionally so. (See the Google image tool mentioned above). Still others will operate exactly as intended, but produce a dataset that is disproportionate to the makeup of the community in which they operate. All true. It is our job to correct AI misalignment with laws, regulations and our values, and also to defend its operation when mere disproportionate data collection or aggregation is the only support for a bias claim.