The Doppelgänger Dilemma
Have you ever had that feeling that you saw someone you knew or someone famous, caught a half-glimpse of a face at an angle, kept staring, and thought, “wow, they look just like…”? The fancy word for that person is a doppelgänger (someone who looks just like someone else but isn’t their twin). Well, law enforcement in the city of Detroit is having that issue too. But in their case, they are arresting people once they have that “you know who you look like?” moment. And, what could go wrong? Turns out, plenty. Is technology to blame? Is too much reliance on technology to blame? Are tech companies over-promising, or is law enforcement over-hoping (that’s not a word!) that technology can do what most humans can’t?
“It Wasn’t Me!” (As So Many Criminal Cases Begin…)
But, for at least three matters now converted into lawsuits against the City of Detroit, the claim is exactly that: “it wasn’t me.”
The Pregnant Robber Who Wasn’t
A man reported a robbery that occurred in view of a security camera. Detroit police fed that footage into software from DataWorks, a company that sells facial recognition systems designed to identify persons appearing in such videos. Many completely legitimate uses for such software are apparent: scanning crowds at high-security events (concerts, visits by politicians, planned or unplanned protests, riots, etc.). In this case, the police had a video of the incident, and when DataWorks analyzed it, they received a hit. That hit was a woman named Porcha Woodruff.
Ms. Woodruff had a previous run-in with the law, which meant the Detroit police had a 2015 mug shot on hand to compare to the face(s) seen committing the robbery. But the police did not merely go and grab her up. Nope. They pivoted to the ever-reliable photo lineup and victim identification of their perpetrator. Surprisingly, the victim also picked Ms. Woodruff out of the photo lineup as the perpetrator of the robbery.
Ms. Woodruff was then arrested and charged with robbery and carjacking. She was required to post a $100,000 personal bond to secure her pre-trial release. However, a month later, all charges were dismissed by the Wayne County Prosecutor. Why? Well, as it turns out, the facial recognition software missed the fact that the perpetrator of the robbery was certainly not visibly eight months pregnant; Ms. Woodruff was. So, a bit of a miss there. In addition, the police involved in her arrest failed to do a basic eyes-on comparison of their suspect to the video of the actual perpetrator.
What Do the Civil Attorneys Say?
Shockingly, the claims Ms. Woodruff’s lawyers make about what was and was not done have yet to be squarely addressed in the city of Detroit’s responses.
One of their initial claims is that the detective who prepared the affidavit supporting the arrest warrant omitted the fact that the image of Ms. Woodruff used for comparison to the robbery perpetrator was eight years old. The lawyers also claim that a much more recent image of Ms. Woodruff, her current driver’s license photo, was available to Detroit police.
The suit also claims that DataWorks produced 73 possible suspect hits in this matter, and there is no information about whether each of those was cleared before the arrest of Ms. Woodruff. Helping the lawsuit is the fact that the Detroit Chief of Police admits the investigator involved violated department policy. Honesty, good. Admission, bad for the city’s defense against Ms. Woodruff’s claim. It is promising, however, that the Chief underlined that Detroit police policy requires more than just a photo lineup identification before arresting someone. Perhaps in this case, it is not the technology that is the problem, but the ignored policy.
But, The Science
Unfortunately for companies like DataWorks and for law enforcement, a 2022 Georgetown University report analyzed 20 years of research on facial recognition technology, its applications and misapplications, error rates, etc. The result? It concluded the technology is not reliable for properly identifying criminal suspects. Is there no proper use of the tool? There is. It’s a winnowing tool, something to make the daunting task of identifying a possible suspect easier. Instead of a security camera image of someone who could be any one of the 40,000 people living in the nearby area, perhaps the facial recognition software can predict that your suspect is more likely to be one of these 73. It’s no guarantee, just something to deal with the reality that not all 40,000 residents of the area can be investigated.
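To make the “winnowing” idea concrete, here is a minimal, hypothetical sketch of what a candidate-shortlisting step could look like. It assumes face images have already been converted into numeric embeddings, and the names it uses (top_k_candidates, gallery, probe_embedding) are made up for illustration; this is the general technique, not DataWorks’ actual software or Detroit’s workflow.

```python
# Hypothetical sketch of a facial-recognition "winnowing" step.
# Assumed/illustrative names throughout; not any vendor's real API.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (closer to 1.0 = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def top_k_candidates(probe_embedding: np.ndarray,
                     gallery: dict[str, np.ndarray],
                     k: int = 73) -> list[tuple[str, float]]:
    """Rank every mug-shot embedding in the gallery by similarity to the probe
    (the security-camera face) and return only the k best matches: a shortlist
    for human investigation, not an identification."""
    scored = [(name, cosine_similarity(probe_embedding, emb))
              for name, emb in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]


# Toy usage: 40,000 random "residents," one probe image from the footage.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(40_000)}
probe = rng.normal(size=128)
shortlist = top_k_candidates(probe, gallery, k=73)
print(shortlist[:3])  # every entry still needs independent corroboration
```

The design point worth noticing is that the output is a ranked shortlist, not an answer; every name on it still requires independent, human corroboration before anyone is arrested.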
Georgetown published a statement with its 2022 report pointing out that facial recognition “may be particularly prone to errors arising from subjective human judgment, cognitive bias, low-quality or manipulated evidence, and under-performing technology” and that it “doesn’t work well enough to reliably serve the purposes for which law enforcement agencies themselves want to use it.”
Bow To The Robot
Psychology will forever play a role in the interaction between humans and AI tools like facial recognition software because of the concept of automation bias.
Two things can be true: Some AI tools and advanced technologies already in widespread use by millions of us (the calculator on our phones, GPS, the instant recall of contact information, etc.) are superior to humans at recalling and processing data. And many tools currently in use are going to be improperly relied upon, with an unearned presumption of accuracy. Reliance on automation, technology, and AI tools is going to increase for rational reasons:
Accuracy and Efficiency: Automated systems, generally speaking, have a track record of high accuracy and efficiency. For example, computer algorithms can analyze vast amounts of data faster and, often, more accurately than a human can.
Consistency: Automated systems provide consistent outcomes. A computer doesn’t get tired, distracted, or emotional and more often provides uniform results under the same conditions.
Economic Incentives: Automated systems can often be more cost-effective in the long run than human operators, particularly in repetitive tasks.
Ease and Convenience: With technology becoming more user-friendly, people find it easier to rely on automated guidance. Whether it's GPS for directions or a recommendation engine for movie suggestions, automation offers convenience.
What Could Go Wrong
Over-reliance: Trusting automated systems blindly can lead to neglect of other sources of information, including one's own judgment or intuition. As a result, if the system is faulty or offers an incorrect suggestion, the user might not question or challenge it. (See Detroit law enforcement issues above).
Skill Degradation: Over time, if human operators keep relying on automated tools and the insights of a presumably smarter AI tool, their own skills will atrophy from disuse. Just consider how many places you regularly go that you could no longer efficiently navigate to due to reliance on your phone’s GPS.
Liability Issues: When mistakes occur due to automation bias, determining responsibility can be complex. Is the user at fault for not intervening, or is the technology provider to blame for a system error? Another consideration for litigation like that mentioned above is whether DataWorks improperly marketed its tool by claiming capabilities it actually lacked, or failed to properly train users on its limitations. I am sure these matters will be litigated in this and other cases going forward.
Complacency: With automated systems taking care of tasks, individuals will become complacent, assuming the system will handle everything perfectly. This complacency will inevitably lead to decreased vigilance. Again, take the example of GPS. When Apple began its foray into a map tool for iOS, there were well-documented examples of users driving into ditches, rivers, etc., just following GPS prompts that were woefully inaccurate. Those problems have long since been resolved, but the reality remains that some segment of society (and maybe all of us, given the right conditions) gets used to relying on such tools and eventually becomes distracted or overconfident in their reliability.
The Skin Tone Problem
In addition to the problems above, facial recognition tools have been studied with regard to how their accuracy depends on the skin tone of the person being targeted. Existing facial recognition tools have been shown to be more likely to generate false positives with Black faces. This should not be a surprising result from a photographic point of view. People with dark skin in a given image (often poorly lit security footage) are always going to provide fewer points of identification/recognition. A poorly lit image of someone of any skin tone is more likely to result in a false positive than a well-lit studio image of the same person. The technology may eventually advance to account for this reality, but until it does, this problem will persist.
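As a rough illustration of why degraded images drive up false positives, here is a toy simulation I am assuming purely for this post (random vectors standing in for face embeddings, nothing from the Georgetown report or any real system). It adds increasing noise to a “probe” of a known person and measures how often the best match in a gallery turns out to be the wrong person.

```python
# Toy simulation: noisier (lower-quality) probe images -> more false matches.
# Everything here is made up for illustration; real systems use learned
# face embeddings, not random vectors.
import numpy as np

rng = np.random.default_rng(1)
DIM, GALLERY_SIZE, TRIALS = 128, 1_000, 200


def false_match_rate(noise_scale: float) -> float:
    """Fraction of trials where a degraded probe of person 0 is best matched
    to someone other than person 0."""
    gallery = rng.normal(size=(GALLERY_SIZE, DIM))
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)  # unit vectors
    misses = 0
    for _ in range(TRIALS):
        noise = rng.normal(size=DIM) * noise_scale / np.sqrt(DIM)
        probe = gallery[0] + noise        # "person 0" seen through a bad image
        probe /= np.linalg.norm(probe)
        scores = gallery @ probe          # cosine similarity to every identity
        if int(np.argmax(scores)) != 0:   # top hit is NOT the true person
            misses += 1
    return misses / TRIALS


for noise in (1.0, 2.0, 4.0, 8.0):
    print(f"noise level {noise}: false match rate = {false_match_rate(noise):.2f}")
```

The only takeaway is the direction of the trend (more degradation, more wrong top hits); the specific numbers are meaningless outside this toy setup.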
Ms. Woodruff suffered as a result of her wrongful arrest, including being hospitalized for dehydration. One of her civil attorneys, Ivan L. Land, was interviewed by the New York Times and stressed that something more than just a facial recognition tool match should be required before an arrest is approved. “It’s scary. I’m worried,” he said. “Someone always looks like someone else.” Indeed.