AI Risks, Safety and Ethics
A Legal Perspective on the Challenges and Responsibilities in Artificial Intelligence
Innovation With Responsibility
Facebook in its early days was renowned for personifying its philosophy of “move fast and break things.” It was a casual way to describe a philosophy with serious consequences. It helped Facebook vault ahead of competitors, and it also put the company in the crosshairs of privacy advocates and regulators. The field of AI and many of its most prominent and powerful developers (think OpenAI, Microsoft, Google, and, a bit behind, Amazon) are in a similar position. They are certainly not saying that early Facebook philosophy out loud, but their desire to hurtle toward the next GPT X.0 release is causing many people inside AI and out to wonder: what could possibly go wrong here?
What Could Possibly Go Wrong?
In two recent posts we summarized both the current EU AI regulatory approach and the U.S. guidelines for AI embodied in the Biden Administration’s recent Executive Order. Regulators are worried. Those regulations make that clear. But, from a technical and practical standpoint, what are they worried about?
To start with, the problem AI poses for society is its capacity to be infused everywhere. The prominent areas of our society it touches, law enforcement, surveillance, information gathering and processing, education, employment and healthcare, cover everyone in the country. Indeed, they cover everyone in every country. To say that a few wrong answers from ChatGPT are not something to worry about misses the point.
AI capabilities are already part of so many everyday events we take for granted. Tesla vehicles have had AI embedded for years, and advertising algorithms on the internet have been using it even longer. Predictive analytics are part of sports, finance, healthcare rationing, border security, policing and more. There is no push to go backwards to a “simpler time” when these AI tools were not yet ready for prime time. Just imagine the wave of AI-related litigation to come. The few class action lawsuits by copyright holders are merely the beginning.
Privacy, Transparency and Accountability
Imagine how effective you would be in a conflict with a family member if you remembered every conversation you had with them, as well as every conversation they had with others. Oh, the contradictions and hypocrisies you could remind them of. Those unconcerned with privacy might ask who really cares if a person has a book with all that information. Finding a specific reference without technology would be cost prohibitive in a person’s time. Finding interrelated, semantically similar passages in that life diary would be equally impossible manually. AI makes finding that information, and more, nearly instantaneous. AI-powered applications have long since surpassed the ability of humans to process that breadth of information. (It has often been rumored, but never confirmed, that humans have never been able to fly the B-2 Stealth Bomber by themselves because its design is so radical it requires AI microsecond decision-making to keep it aloft.)

Have we lost all privacy already? Does the 20- to 30-year-old generation even care about privacy? Regulators are keen to focus on the privacy of individuals in their own information, the data about their movements in public, purchases, websites visited, etc. Undoubtedly, many of the 100 million ChatGPT users are depositing private information, in the form of queries and uploaded files, seeking answers from the world’s most capable LLM (so far). OpenAI has a privacy policy, but it only applies to U.S. customers; it has a separate policy applicable elsewhere. The policy alerts users that the information they upload, in the form of queries and the like, is retained and used by OpenAI. Users also agree that their information can be transferred (read: sold) to third parties. The policy puts users in the sharing pool by default: you are sharing everything unless you opt out. Perhaps a more user-friendly protection of privacy is mandating that AI developers adopt an opt-in policy, meaning that unless a user takes an affirmative step to agree to permit retention and use of their information by a developer, it is never retained. These decisions affect innovation, to be sure, and privacy. Trade-offs, always trade-offs. No solutions.
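Returning to the life-diary example above: here is a deliberately crude, minimal sketch of why finding related passages becomes nearly instantaneous once text is turned into vectors. The bag-of-words “embedding” below is a toy stand-in for the learned embedding models a real system would use; nothing here reflects any particular vendor’s product.

```python
# Toy semantic-style search over a "life diary": each passage becomes a
# vector, and finding related passages becomes a fast similarity ranking
# rather than a manual read-through. Real systems use learned embeddings
# that capture meaning, not just shared words.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Crude bag-of-words vector; a real embedding model goes here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

diary = [
    "Argued with my sister about the inheritance again.",
    "Lovely walk by the lake this morning.",
    "She promised she would never bring up the will at dinner.",
]

query_vec = embed("every conversation about the inheritance and the will")

# Rank every diary passage by similarity to the query in a single pass.
for passage in sorted(diary, key=lambda p: cosine(embed(p), query_vec), reverse=True):
    print(round(cosine(embed(passage), query_vec), 2), passage)
```

The point is not the crude scoring; it is that the lookup is automatic and effectively free, which is exactly what changes the privacy calculus for any archive of personal information.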
Transparency
Why is it a risk if AI tool developers are not transparent? Well, what training data was used to build their model? Where was it obtained? Was it obtained legally? Does it contain illegal content? What modifications to outputs have been implemented in the design of the model to cater to the philosophy of that AI developer? What did the developers consider a biased response, such that they trained the model to output “global warming is a hoax” or “global warming is an emergency and we must act now!”? Knowing how models are being trained, modified, and so on affects the companies and users relying on them. The EU AI Act does not appear to give any liability break to companies deploying an AI model they have purchased or subscribed to that ends up acting out an unlawful bias. To say “well, the company that developed the AI tool is liable for the bias, not us” is not going to work. A company, government organization or university that is shown to have engaged in unlawful discrimination arising from reliance on an AI tool will not be able to shield itself on the ground that it did not produce the original AI tool. Simply to minimize or avoid legal liability, companies will have to insist that AI developers are transparent about how their tools were developed.
The newest Barbie doll can interact with children using AI. Is it recording their comments? Is it saving that data? Does its retention of that data enable the toy to change its responses to cater to your child’s personality? Has it been developed to try to shape your child’s worldview through its responses? No one knows, and Mattel, the maker of Barbie, is not saying. Yet the toy is being pushed into the market. Nowhere else do we permit products to be placed into stores without some minimal safety testing.
Accountability
This is where we come in. Things will go wrong. They already have gone wrong. (See IBM’s list of AI bias examples.) The question, as with analogous legal issues, will be: Who is liable? But for a lawyer to understand how to determine where liability should lie, and to repel arguments from the various defendants in the causal chain, you have to understand how the AI tool was developed. It would also be helpful to know the applicable regulations and understand whether they were violated in the development or deployment of that tool. Regulators will not catch all such violations. It should be one of your first investigations to determine whether the AI tool was developed consistent with applicable regulations. Most will not be. Startups are often still “moving fast and breaking things,” and they often lack the money to pay for a person to monitor AI risk.
Bias
We have touched on this issue in different ways in previous posts. The bottom line is that bias is a content-neutral word. Your notion of biased information may be another person’s notion of truth. But, there is no doubt that every AI tool ever produced and ever to be produced will have to grapple with which way to turn the bias dial.
The EU AI Act makes clear that AI developers cannot merely test their tool, ensure it is properly aligned with the regulations, and then release it for use. In addition, they must provide ongoing testing information to regulators to ensure that their tools, while in use by customers, do not somehow generate outputs or features that violate the regulations.
AI Is Not Bound By Truth
Many non-technical folks have understandably reacted to the outputs of ChatGPT and other models with amazement. The ability of LLMs to predict the next word, and to do so nearly instantaneously, confuses users into thinking that LLMs know something; that is, that LLMs can answer questions guiding you toward the truth and away from false information. LLMs, as you must preach to all your clients, know nothing. They certainly do not know the truth. They only know what the likely next word is. They do not even know the meaning, as we humans do, of the words they output. They simply know the math which predicts, based upon the data they were trained on, the most likely next word.
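To make that concrete, here is a deliberately toy sketch, not any vendor’s actual model, of next-word prediction. The “model” below is nothing more than counts of which word followed which in its training text, and it will continue a false sentence exactly as confidently as a true one.

```python
# A toy next-word predictor: it "knows" only which word tended to follow
# which in its training text. Nothing in it represents truth or meaning.
from collections import Counter, defaultdict

training_text = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
    "the moon orbits the earth . "
)

# Count word -> next-word frequencies (a simple bigram model).
counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    counts[current][nxt] += 1

def predict_next(word: str) -> str:
    # Return the statistically most likely next word seen in training.
    return counts[word].most_common(1)[0][0]

# Generate a continuation one "most likely" word at a time.
word, sentence = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))
# Prints "the moon is made of cheese ." because "cheese" followed "of" more
# often than "rock" did in the training text. Frequency wins, not truth.
```

Real LLMs replace these raw counts with billions of learned parameters, but the essential point stands: the output is the statistically likely continuation of the input, not a statement the system knows to be true.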
AI is not bound by any notion of what is true. These tools provide mathematically likely words in sequence, but they do not provide truth. Therefore, all AI tools will be modified by their developers, who will decide what they think is true. There were many early examples when ChatGPT 3.0 made its debut. Users were showing how it could be prompted to tell a joke about some politicians, but would politely decline to tell a joke about others. It could be prompted to write a laudatory piece of prose about a certain world event or group cause, but would then decline to write a critical essay about that same topic. LLMs do not make such decisions. They are modified to act out the values of the developers who created them. Some have posited that the future of AI tools might actually heighten cultural tribalism. Organizations, governments and companies may well be able to select LLMs, for example, that have been fine-tuned to provide answers aligned with certain political, religious or other positions while denigrating the opposition. Cable television and news websites have already shown there is a market for competing versions of the truth. There is no reason to suspect that AI developers will ignore these markets and their preferences.
Old Jobs Disappear, New Jobs Emerge
Much has been and will continue to be written about the economic upheaval AI will generate. No one can predict just how it will play out. Jobs once done by humans (initially knowledge worker jobs, as it turns out) will increasingly be augmented at first and eventually overtaken by AI tools. Consider encyclopedias as an example. Consider the chain of employment: the factories where the physical books were assembled, the trucks to ship them, the salespeople to sell them, the writers, editors and others to author, update and design them. And consider that 50 years ago there were several companies competing in that market, with thousands of people’s livelihoods dependent on that industry. Those are all long gone. Thousands of good-paying jobs simply disappeared. Everyone in that industry, or who would have been in it, went elsewhere for employment. Some of the AI displacement will be like that, for sure. How much, no one can accurately predict.
One job that will arise everywhere is the AI safety and risk manager. Companies developing AI tools and organizations deploying them will both have to develop teams, never just one person, to monitor their usage. Organizational risks will include economic, reputational and legal. Those risks are never going back to where they were five years ago. They are here to stay, because AI and what it will bring to the legal field and elsewhere is here to stay.