Just as the advances in AI itself (most notably LLMs at the moment) are coming at a speed that even AI experts did not accurately predict, so too are calls coming, both in the United States and abroad, for the regulation of AI generally.
A society without rules does not exist. The argument to not regulate AI at all is thus a microcosm of that argument for wider society, and one destined to lose in the end. However, regulating AI such that only governments control its advancement is also likely to fail. The question becomes not only what regulations would suitably balance control against hindering innovation, but whether AI regulations can be effective at all. This post is about what has been attempted thus far and how those attempts stake out the outlines of what regulators and the AI industry consider the appropriate regulatory postures for balancing those interests.
Who Will Get There First?
Long gone are the days when we in the United States could ignore what other countries' regulators are doing in the tech space. Some of our largest U.S.-spawned companies (think FAANG: Facebook, Apple, Amazon, Netflix, and Google) have to comply with the regulations of every country in which they operate. So lawyers and regulators here in the United States have to pay attention to what is happening in Europe, Asia, and elsewhere. And for many of those companies, whichever jurisdiction regulates them first is likely to drive the internal policies that will constrain their behavior in the United States as well.
The Different Schemes
When it comes to regulating any industry, governments can take one of a few approaches:
1. Wait for products/services to operate in the marketplace, see what evils befall customers/competitors, and respond to address those issues.
2. Formulate regulations in advance of the wider distribution of those products/services, and then attempt to revise those regulations once their effect on the market, competition, innovation, safety, etc. becomes clear.
3. Lock down the entire industry by constraining it to a licensing scheme, a gatekeeping function that prevents any products/services of that type from being offered until they meet specified criteria (think healthcare providers, truck drivers, etc.).
For AI, however, it seems too late to implement the third option above. AI tools of various types have been used by companies large and small for decades at this point. The first option is about where we are in the U.S. AI tools are both in wide use on the back end of companies (e.g., Facebook's various AI tools to promote posts and win the "attention" of users so they stay on its products for as long as possible each day) and now in use even by non-techies. ChatGPT was released less than a year ago and is already a term commonly understood in wider, non-tech society. I just received a notice the other day for a CLE program focused entirely on ChatGPT. Lawyers are in heated debates on Twitter about whether ChatGPT will be their fiercest competitor, especially for relatively low-value disputes.
Europe Making Its Move
At the end of last year, the European Union was in the later stages of codifying the Artificial Intelligence Act (the "AI Act"). The Act's stated goal is to balance the societal benefits and risks of the use of AI technology. The preamble to the proposed regulation states that it is
“lay[ing] down a coherent, effective and proportionate framework to ensure AI is developed in ways that respect people’s rights and earn their trust, making Europe fit for the digital age and turning the next ten years into the Digital Decade.”
The substance of the 100+ page proposed regulation is too voluminous to outline here, but here is a summary of the regulation produced using a tool I designed for that purpose with Python (a common data-analysis programming language), the OpenAI API, and a module called LangChain:
The European Commission has proposed a legal framework for Artificial Intelligence (AI) to promote its uptake while addressing the associated risks. The framework is based on EU values and fundamental rights, and is intended to give people and other users the confidence to embrace AI-based solutions, while increasing human well-being. The proposal seeks to bring economic and societal benefits across industries and social activities by improving prediction, optimising operations and resource allocation, and personalising service delivery.
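For the technically curious, here is a minimal sketch of how a summarization pipeline of that sort might be wired together. It is an illustration, not my production code: it assumes the classic LangChain summarize chain, an OpenAI API key in the environment, and a local PDF copy of the proposal (the file name is hypothetical).

```python
# Minimal sketch of a document-summarization pipeline with LangChain + OpenAI.
# Assumes: `pip install langchain openai pypdf`, OPENAI_API_KEY set in the
# environment, and a local PDF of the proposal (file name is hypothetical).
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain

# Load the 100+ page proposal and split it into chunks the model can handle.
docs = PyPDFLoader("eu_ai_act_proposal.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=2000, chunk_overlap=200
).split_documents(docs)

# "map_reduce" summarizes each chunk, then combines the partial summaries
# into the kind of overview quoted above.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
print(chain.run(chunks))
```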
I am working on designing an open-source web application enabling natural-language searching of the proposed regulation, so that even non-lawyers can explore its potential effects. I will send that link out to all subscribers shortly.
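As a preview of the idea behind that application, here is a hedged sketch of natural-language search over the same document: embed the text, index it in a vector store, and answer plain-English questions with retrieval. The FAISS index, embedding model, and sample question are my illustrative choices, not necessarily what the finished application will use.

```python
# Sketch of natural-language search over the proposal using retrieval.
# Assumes: `pip install langchain openai faiss-cpu pypdf` and OPENAI_API_KEY set;
# the file name and the question are illustrative.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Split the regulation into passages sized for embedding.
docs = PyPDFLoader("eu_ai_act_proposal.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Build a vector index so a question can be matched to relevant passages.
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Answer plain-English questions using only the retrieved passages as context.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    retriever=index.as_retriever(),
)
print(qa.run("Which AI systems does the proposal treat as high-risk?"))
```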
What Is the United States Doing?
In January 2023, our National Institute of Standards and Technology (NIST) released a 42-page document entitled the AI Risk Management Framework. The document states as one of its concerns the following:
[AI represents] a uniquely challenging technology to deploy and utilize both for organizations and within society. Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities. With proper controls, AI systems can mitigate and manage inequitable outcomes.
An AI-powered summary I created using the tools noted above is the following:
The National Institute of Standards and Technology (NIST) has released the Artificial Intelligence Risk Management Framework (AI RMF 1.0) to help organizations manage the risks associated with AI. This document outlines four core functions (Govern, Map, Measure, and Manage) and six profiles, as well as four appendices that provide descriptions of AI actor tasks, how AI risks differ from traditional software risks, AI risk management and human-AI interaction, and attributes of the AI RMF. It is available free of charge and will be updated periodically.
I have also included this document as part of the natural-language searchable web application design described above. While the framework presents lofty goals for the continued innovation of AI tools, it also leaves much to the future to be decided as to how the government intends to achieve those goals without choking that innovation.
The framework organizes AI risk management around its four core functions:
Govern: Organizations must cultivate a risk management culture, including appropriate structures, policies, and processes. Risk management must be a priority for senior leadership.
Map: Organizations must understand and weigh the benefits and risks of the AI systems they seek to deploy as compared to the status quo, including helpful contextual information such as the system's business value, purpose, specific task, usage, and capabilities.
Measure: Using quantitative and qualitative risk assessment methods, as well as the input of independent experts, AI systems should be analyzed for fairness, transparency, explainability, safety, reliability, and the extent to which they are privacy-enhancing.
Manage: Identified risks must be managed, prioritizing higher-risk AI systems. Risk monitoring should be an iterative process, and post-deployment monitoring is crucial given that new and unforeseen risks can emerge.
While it makes sense to require companies releasing AI tools to the public to vet their safety ahead of time, the complexity of continuing to monitor their use once released may prove daunting. In just the last six months, numerous tools have been spawned from the widespread use of ChatGPT. Those include an open-source project called AutoGPT, which received more than 100,000 GitHub stars (a rough signal that developers find the tool valuable in their work) in less than a month. As with so many technological innovations, no one company, no matter its size, has the capacity to predict the thousands of possible uses of its product. It is the same with AI tools like ChatGPT.
Numerous videos on YouTube reveal relatively easy ways of navigating past ChatGPT's supposed controls to receive advice on how to build bombs, how to poison large numbers of people, and the like. One user was able to bypass ChatGPT's initial refusal to create an article about the benefits of humans eating glass: ChatGPT eventually confabulated an entire article, complete with fabricated citations, claiming that eating glass is healthy and potentially beneficial. The propagation of misinformation of this type at scale is an inevitable part of our future. Will it render the entire Internet useless once it becomes impossible to trust anything you read, see, or hear online, given the ability of AI tools to create apparently reliable content that is entirely manufactured? We are all going to find out.
The FTC has also entered the AI regulation game more aggressively. One of its more prominent proposals is the regulation of AI tools designed for surveillance. We have already seen other countries, most notably China, adopt widespread surveillance tools on nearly every street corner in every major city. While that seems like an impossibility in a culture like ours, with its history of a healthy respect for freedom, it has been only a decade or so since Edward Snowden disclosed widespread NSA surveillance of the American public. A federal court eventually found the program illegal. As of this writing, Mr. Snowden is a fugitive living abroad, and NSA surveillance has not stopped.
Another area of focus has been the deployment of AI tools for credit scoring and lending. The outcomes of those tools' decisions are going to be scrutinized for perceived biases. Once such biases are detected, the companies producing those algorithms will be expected to provide transparency into how their algorithms operate and to prove the absence of inherent bias in their outputs. Is that even possible? Lawyers will need to get up to speed on these issues to argue effectively either way in response to such enforcement actions.
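What might one small piece of such a bias audit look like? The toy sketch below checks a hypothetical lending model's approval rates for demographic parity. The data, group labels, and tolerance are all invented for illustration; real audits involve far more than a single disparity metric.

```python
# Toy sketch of one bias-audit ingredient: comparing approval rates across
# demographic groups (demographic parity). All data here is hypothetical.
from collections import defaultdict

# Each record: (group label, model's approval decision) from an audit sample.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1, False as 0

rates = {group: approvals[group] / totals[group] for group in totals}
print("Approval rates:", rates)

# Flag the disparity if the gap exceeds an (illustrative) 20% tolerance.
gap = max(rates.values()) - min(rates.values())
print(f"Disparity: {gap:.0%} -> {'review required' if gap > 0.2 else 'within tolerance'}")
```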
Other regulatory areas currently in the legislative process include:
The EEOC's concerns about AI tools used for job screening
Proposed laws to prevent bias in healthcare (e.g., the Health Equity and Accountability Act of 2022 (H.R. 7585)), including "bias audits" (the creation and fair implementation of which are as yet unknown)
AI tools in insurance
AI tools used for lending decisions
Proposed EU legislation that would enable private actions against companies that have harmed citizens via AI tools
While AI seems like a frontier open to innovation, advancement, and some risk, the regulations, both state and federal, likely to be enacted in the next 12-24 months will certainly keep lawyers, AI developers, and companies busy navigating the new rules while building valuable services and tools.