Artificial Intelligence (AI) has permeated, and will continue to permeate, every corner of our lives. From smartphones to workplaces and public spaces, AI has simplified countless tasks and brought about innovations we couldn't have dreamed of a few decades ago. Nowhere, perhaps, is this more evident than in the realm of law enforcement.
In our quest for safety and justice, AI offers an array of enticing tools that can aid in everything from predictive policing to facial recognition. It has the potential to make our communities safer, our responses quicker, and our justice system more efficient. Imagine a world where law enforcement can pinpoint crime hotspots before incidents occur (I know, I know, Minority Report come to life), or where facial recognition can identify a suspect within moments. It sounds like the future we've always dreamed of, right?
Well, hold on to your tin-foil hats, because, like any sword (I am catching up to most of you, currently just into the second season of Game of Thrones), AI in law enforcement cuts both ways. It isn't all rainbows and robocops. As much as it promises to revolutionize crime fighting, it also presents a minefield of civil rights and privacy concerns, some of which have already emerged and more of which are sure to follow. Could predictive policing lead to unfair profiling? Does constant surveillance by AI tools infringe upon our right to privacy? And how accurate are these AI systems, really? Could reliance on them lead to innocent people being convicted, or guilty ones going free?
In this post, we're delving into these questions, exploring the tremendous benefits and potential pitfalls of using AI in law enforcement. We'll take a look at real-world examples and examine current research on both sides of the debate. So buckle up, dear reader, because we're about to embark on a journey into the brave new world of AI-powered law enforcement.
First, A Few Real World Examples
The Chicago Police Department and PredPol: The Chicago Police Department implemented a predictive policing software called PredPol. The system uses algorithms to analyze historical crime data and predict potential future crime hotspots. Officers are then directed to increase their presence in these areas in an effort to deter potential criminals.
New York Police Department and Facial Recognition: The New York Police Department has been using facial recognition technology to identify potential suspects from surveillance footage, driver’s license images and other sources. The system can compare a photo of an unknown person to a database of known individuals (such as mugshots or driver's license photos). In one notable case, the NYPD used the technology to identify and apprehend a suspect in a pressure cooker bomb incident in 2016.
Los Angeles Police Department and Palantir: The Los Angeles Police Department uses software from Palantir, a company known for its powerful data integration and analysis tools. Palantir's software enables LAPD officers to integrate vast amounts of disparate data (such as license plate readings, arrest reports, and field interviews) into a single, searchable platform. The system's AI can help identify patterns and connections that would be difficult for a human analyst to spot, aiding in investigations.
Some Common Current Uses of AI in Law Enforcement
Predictive Policing: AI analyzes vast amounts of crime data, identifying patterns and predicting future hotspots. The technique can weigh factors such as the type of crime, location, time, and even sociodemographic factors. This assists law enforcement in deploying resources efficiently, potentially preventing crime before it occurs.
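To make the idea concrete, here is a minimal sketch of the core of hotspot prediction: bucket historical incidents into coarse grid cells and rank the cells by volume. The coordinates are invented for illustration; this is not any department's actual model, which would also weigh time, crime type, and other factors.

```python
from collections import Counter

# Hypothetical historical incidents as (latitude, longitude) pairs.
incidents = [
    (41.881, -87.623), (41.882, -87.624), (41.881, -87.623),
    (41.950, -87.700), (41.881, -87.622), (41.950, -87.701),
]

def hotspot_cells(points, cell_size=0.01, top_n=2):
    """Bucket incidents into a coarse grid and rank cells by incident count."""
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size)) for lat, lon in points
    )
    return counts.most_common(top_n)

print(hotspot_cells(incidents))
```

A real system would replace the raw count with a model trained on many features, but the output is the same in spirit: a ranked list of places to send patrols.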
Facial Recognition: Like in the New York City example above, AI algorithms are increasingly capable of identifying individuals from video surveillance or photographs, making them powerful tools in criminal investigations. These systems can sift through millions of faces in seconds, cross-referencing them with criminal databases, potentially leading to quicker apprehensions.
Text Analysis and NLP (Natural Language Processing): Police receive numerous reports daily, much of them unstructured text. AI can quickly analyze this information, extracting relevant details and recognizing patterns or links that humans might overlook or find too complex to detect.
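As a toy illustration of what that extraction might look like, consider pulling structured leads out of a free-text report. The report text and patterns below are invented for this example; production systems use trained language models rather than hand-written regexes.

```python
import re

# Hypothetical unstructured incident report text.
report = (
    "Witness saw a blue sedan, plate ABC-1234, leave Elm St around 11:45 PM. "
    "A second call at 11:52 PM described the same vehicle near Oak Ave."
)

def extract_leads(text):
    """Pull structured leads (plates, times, street names) out of free text."""
    return {
        "plates": re.findall(r"\b[A-Z]{3}-\d{4}\b", text),
        "times": re.findall(r"\b\d{1,2}:\d{2}\s?[AP]M\b", text),
        "streets": re.findall(r"\b[A-Z][a-z]+ (?:St|Ave|Blvd)\b", text),
    }

print(extract_leads(report))
```

Even this crude version shows the value: two separate calls mentioning the same plate or street become a linkable pattern instead of two pages in a filing cabinet.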
Social Media Analysis: Long before some suspects become suspects, they create a years-long trail of activity: connections to others and endorsements of ideas, projects, and organizations, all publicly available to viewers of the relevant social media platforms. AI can analyze public social media posts to identify potential threats or criminal activity. Advanced machine learning models can identify harmful or suspicious behaviors by analyzing text, images, and even patterns of online activity. Many social media companies retain this information even after users deactivate their accounts, enabling law enforcement to subpoena it during investigations.
Gunshot Detection: AI-powered systems like ShotSpotter can identify and locate the source of a gunshot within seconds. These systems rely on an installed network of audio sensors spread across portions of a given area. These sensors can then triangulate the origin of a gunshot based on the time it takes for the sound to reach each sensor. Quick response times to confirmed gunshots at a particular location can potentially save lives and aid in capturing shooters.
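The triangulation step described above can be sketched in a few lines. This is a simplified brute-force version with invented sensor positions, not ShotSpotter's actual algorithm: given arrival times at four sensors, search for the point whose predicted inter-sensor time differences best match the measured ones.

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # meters per second, roughly, in air at 20 C

# Hypothetical sensor positions (x, y in meters) and a known test source.
sensors = [(0, 0), (500, 0), (0, 500), (500, 500)]
true_source = (120, 340)

# Simulate the arrival time of the bang at each sensor.
arrivals = [math.dist(s, true_source) / SPEED_OF_SOUND for s in sensors]

def locate(sensors, arrivals, step=10):
    """Brute-force grid search: return the grid point whose predicted
    time differences (relative to sensor 0) best match the measured ones."""
    measured = [t - arrivals[0] for t in arrivals]
    best, best_err = None, float("inf")
    for x, y in itertools.product(range(0, 501, step), repeat=2):
        d = [math.dist(s, (x, y)) / SPEED_OF_SOUND for s in sensors]
        predicted = [t - d[0] for t in d]
        err = sum((m - p) ** 2 for m, p in zip(measured, predicted))
        if err < best_err:
            best, best_err = (x, y), err
    return best

print(locate(sensors, arrivals))  # recovers (120, 340)
```

Real deployments solve the same time-difference-of-arrival problem analytically and must also contend with noisy sensors, echoes, and classifying the sound as a gunshot in the first place.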
Illegal Drug Sale Data Discovery: AI can identify illegal drug sales on the dark web by analyzing patterns, keywords, and user behavior. This can aid in stopping drug trafficking and identifying potential drug producers and distributors.
Traffic Analysis: AI tools can analyze traffic data and identify suspicious behavior such as frequent speeding, potential drunk driving, or hit-and-run cases. Advanced algorithms can analyze CCTV footage to identify vehicle types, license plates, and even specific individuals, assisting in investigations.
Crime Scene Analysis: AI can aid forensic investigators by analyzing crime scene data. It can help predict the weapon used based on the injuries, simulate the crime based on the evidence and the surroundings, or even identify suspects by analyzing fingerprints or other evidence left at the scene. 360-degree cameras, in use for more than a decade, let investigators set up a device on a tripod and capture high-resolution images of an entire room, including precise measurements of every object's size and position relative to everything else in the room.
Fraud Detection: AI systems can analyze financial transactions on a massive scale, identifying patterns that suggest fraudulent activity. These tools can alert authorities to potential instances of money laundering, identity theft, or other forms of financial crime. Banks have used this kind of AI for years, resulting in Suspicious Activity Reports regularly being provided to the federal government for further investigation.
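A bank's real models are far more sophisticated, but the underlying idea of anomaly detection can be shown with a crude z-score filter over invented transaction amounts:

```python
import statistics

# Hypothetical transaction amounts for one account, in dollars.
transactions = [42.10, 55.00, 38.75, 61.20, 47.90, 9_800.00, 52.30]

def flag_anomalies(amounts, threshold=2.0):
    """Flag amounts more than `threshold` standard deviations from the
    mean; a crude stand-in for the trained models banks actually use."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) > threshold * stdev]

print(flag_anomalies(transactions))  # flags the $9,800 outlier
```

In practice the features go well beyond amount (merchant, geography, timing, velocity), and a flagged transaction triggers review by a human analyst, not an automatic accusation.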
Risk Assessment: AI can analyze data on arrested individuals to assess their risk of reoffending. These assessments can inform decisions on bail, sentencing, and parole, helping to ensure that high-risk individuals are monitored closely, while low-risk individuals get the support they need to avoid reoffending.
And, On The Other Hand…
Predictive Policing: While it can enhance efficiency, predictive policing can also lead (and has led) to biased profiling. These kinds of algorithms can inadvertently reinforce existing biases in the data. For instance, if historical data shows a high occurrence of crime in specific areas, the algorithm might over-prioritize those areas, leading to over-policing. If the data used to train the algorithm contains a disproportionate number of suspects from a given demographic, social, racial, economic, or ethnic group, the resulting output will reflect the bias in the training data.
Facial Recognition: Facial recognition technology raises significant privacy concerns. Erroneous identification can lead to wrongful arrests, and constant surveillance could infringe on people's right to privacy. These systems can record the public movements of non-suspects, footage ostensibly not reviewed unless an event merits investigation. However, the devil is in the details. How will such data be stored? Who can access it? Is a subpoena or search warrant required to access it? How long is it retained? Can citizens petition to have their data removed from the system? Can the public request copies of the data in public records requests?
Text Analysis and NLP (Natural Language Processing): Despite its potential benefits, NLP can be invasive if used to scrutinize private communications without consent or adequate legal oversight. It might inadvertently target innocent citizens and infringe on their right to privacy. As the best-selling book Three Felonies A Day posits, anyone subjected to sufficient scrutiny can be found to have violated some common or obscure federal or state law. Does analyzing everything you type, read, and post seem like it might be a tad intrusive in a free society?
Social Media Analysis: Analyzing social media posts without explicit user consent can violate privacy rights. There's also the risk of misinterpreting online behavior or language nuances, potentially leading to wrongful suspicion or accusations. Two pending lawsuits that have been the subject of recent posts here underline the possibility that using publicly available content without permission might constitute a copyright violation, although it is difficult to see how that argument would succeed in excluding such content from a criminal case.
Gunshot Detection: While it can help identify gun violence quickly, the audio surveillance required for these systems can infringe on privacy rights, potentially capturing private conversations within homes or businesses. The decision on where to set these monitors also has implications for disproportionate targeting of particular neighborhoods.
Traffic Analysis: While AI can help improve road safety, constant traffic monitoring can lead to mass surveillance. Also, the misuse of this data could lead to unwarranted tracking of individuals’ movements, violating their privacy rights.
Crime Scene Analysis: Despite helping solve crimes, these technologies, if used without proper validation or oversight, could produce false positives, resulting in wrongful accusations or convictions. There was a story in the news recently about a well-known expert, Henry Lee, being hit with a multi-million dollar judgment for failing to properly analyze crime scene evidence and producing fraudulent results.
The Future Is Easily Predictable: Unknown
Photography, facial recognition, textual analysis, and more are going to be used by law enforcement. The balance between safety and freedom is always in flux. AI tools do not represent a new concern for that balance; they are new tools in an age-old struggle over where to draw the line between safety and freedom. Of course, ultimate safety leads to imprisonment, while ultimate freedom leads to anarchy. Somewhere in the middle, AI tools will operate, focused on safety while affecting our civil rights in ways small and perhaps large. The solution is not a static rule, to be sure. But it is likely some rule as a starting point that we can iterate on over time to try and find the right balance.