Hypothetical Horizons: Unraveling Liability in the AI Era
How to Analyze Liability for a Hypothetical Autonomous Military Drone Misfire
Not only are lawyers using (and in some cases misusing or negligently using) LLMs and other AI tools, but the world around us is increasingly filled with AI-powered tools. My college-aged daughter was recently picked up by an Uber in Arizona and, in her terse but sufficiently explanatory way, narrated this short video with her one-word reaction to being a passenger in a car driven by AI.
The military first landed a fully autonomous drone on an aircraft carrier at sea (considered by pilots to be the most difficult landing in aviation) more than 9 years ago!
These systems are all around us. For lawyers, the question of liability when one of these systems goes wrong will not be like analyzing vehicle collision liability or a slip-and-fall in the grocery store. The layers of persons and entities involved in bringing an autonomous AI system into production use, along with potential modifications to the systems, rules, parameters (discussed below), and other characteristics of those systems, make the analysis far more complicated.
Examples of autonomous AI systems in use today:
Virtual Assistants: Amazon's Alexa, Apple's Siri, Google Assistant, and Microsoft's Cortana use AI technologies to understand and respond to user queries, perform tasks, and control smart home devices.
Recommendation Systems: Platforms like Netflix, Amazon, and Spotify employ AI-based recommendation systems to personalize content recommendations for users. These systems analyze user behavior, preferences, and patterns to suggest relevant movies, products, or music.
Fraud Detection: Financial institutions employ AI tools to detect and prevent fraud. These tools analyze transactional data, identify suspicious patterns, and flag potential fraudulent activities, helping in risk assessment and fraud prevention. They also use autonomous tools to make credit decisions.
Healthcare Diagnostics: AI tools are used in medical imaging, such as interpreting X-rays, MRIs, and CT scans, to assist doctors in diagnosing diseases and conditions more accurately and efficiently.
Industrial Automation: AI-powered robots and machines are used in manufacturing and industrial processes to automate repetitive tasks, improve efficiency, and enhance productivity.
Content Moderation: Social media platforms and online communities utilize AI algorithms to automatically moderate and filter content, identify and remove inappropriate or offensive material, and combat spam and fake accounts.
To illustrate how to start thinking about liability in the AI era, we will consider the following hypothetical: A family comes to your office for an initial appointment. They present a personal injury matter, the death of one of their loved ones. A middle-aged man with a wife and three children here in the United States was traveling outside the country. One day, on the way to get coffee, a large explosion occurred directly on top of him from what was later determined to be a lethal weapon launched by a U.S. military drone. He and several other locals were killed instantly, and multiple nearby buildings were leveled. The family knows you specialize in determining liability for personal injury matters. After all, your billboards and advertisements are seen all over that part of your state. Now, let’s dig into how you might assess liability for this incident and advise this family.
The Study Before The Initial Conference
With the increasing integration of artificial intelligence in various domains, it is crucial to establish clear frameworks for liability and accountability. The unique characteristics of AI, such as its ability to learn, adapt, and make decisions autonomously, pose new challenges when determining liability for incidents. Questions arise about whether an AI system was part of the potential chain of causation, whether it was capable of autonomous deployment, its use in other domains, its history of testing, its error rate (Evid. R. 702, anyone?), and more.
Overview of the Incident
In our hypothetical scenario, an autonomous military drone, equipped with advanced target acquisition systems and weaponry, was deployed on a reconnaissance mission in a conflict zone. While conducting its mission, the drone mistakenly identified and targeted an innocent U.S. civilian instead of the intended hostile combatant. Were there any human operators “in the loop?” If so, what are the records of what they did or did not do as the moment to deploy the lethal weapon approached? What if, despite attempts by the human operators to intervene and abort the mission, the drone's autonomous decision-making failed to respond, leading to the tragic consequences of the wrongful targeting? Can the AI implemented in lethal weapons systems ever develop its own goals? And, if it develops its own goals, can it work to override attempts by human operators to divert the drone from a goal it is determined to reach?
The Mistaken Targeting
The wrongful targeting in this hypothetical is unlikely to be the result of just one mistake. More likely, a series of errors combined to cause the death of the family member of the persons in your office. In fact, most aircraft accidents are determined to be the result of a series of errors rather than a single one, even when pilot error is the official determination.
Was the algorithm that mistakenly identified the target faulty? Was it fine-tuned by a private company or the military itself? Was it prone to hallucination in its response to conditions leading up to the decision to deploy the lethal weapon? Were the human operators inadequately trained? The circumstances surrounding the incident may involve challenging environmental conditions, such as low visibility or obscured identification markers.
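As one concrete example of the kind of analysis an expert might run on material produced in discovery: if the contractor's evaluation logs record the targeting classifier's predictions against ground truth, error rates can be tallied overall and under degraded conditions such as low visibility. The sketch below is a minimal illustration only; the file name and column names are hypothetical placeholders, not any contractor's actual format.

```python
# Minimal sketch: estimating a target classifier's error rates from a labeled
# evaluation log. The file name and column names are hypothetical placeholders.
import csv
from collections import Counter

def error_rates(path):
    """Tally false positives/negatives overall and under degraded visibility."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: label, prediction, visibility
            truth, pred = row["label"], row["prediction"]
            key = "low_vis" if row["visibility"] == "low" else "normal"
            if pred == "hostile" and truth != "hostile":
                counts[(key, "false_positive")] += 1   # targeted a non-combatant
            elif pred != "hostile" and truth == "hostile":
                counts[(key, "false_negative")] += 1   # missed a real combatant
            counts[(key, "total")] += 1
    return {
        k: {
            "false_positive_rate": counts[(k, "false_positive")] / counts[(k, "total")],
            "false_negative_rate": counts[(k, "false_negative")] / counts[(k, "total")],
        }
        for k in ("normal", "low_vis") if counts[(k, "total")]
    }

if __name__ == "__main__":
    print(error_rates("evaluation_log.csv"))  # placeholder file name
```

A comparison of the low-visibility rates against the normal-condition rates is exactly the kind of figure a Rule 702 analysis of "known or potential error rate" would turn on.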
The Military Conduct
The military member operating the autonomous drone plays a significant role in the incident and may bear some liability. The operator's responsibilities might include overseeing the drone's operations, monitoring its actions, and making decisions based on the information provided by the drone's systems. Those responsibilities could be spread among multiple individuals. What are the protocols for the deployment of autonomous aircraft and their weapons systems?
How are autonomous driving systems programmed to avoid the stereotypical child chasing a ball into the roadway? What if the only option is for the vehicle to swerve onto the sidewalk, where its vision system can clearly detect other people? Is it programmed to collide with the person improperly entering the roadway (despite that person being a 4-year-old)? Or what if the vehicle assesses that the people on the sidewalk are elderly, the person entering the roadway is 4 years old, and it is programmed to collide with the older people? That is a utilitarian analysis. These are questions to ask.
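To make the discovery target concrete: if such a trade-off exists, it lives somewhere in code or configuration and can be demanded and examined. The sketch below is purely illustrative of what a programmed, utilitarian-style rule could look like; no vendor is known to use this logic, and every name in it is hypothetical.

```python
# Purely illustrative: the kind of hard-coded trade-off rule discovery might
# uncover in an autonomous-driving stack. Hypothetical names and logic only.
from dataclasses import dataclass

@dataclass
class DetectedPerson:
    estimated_age: float
    in_roadway: bool  # True if the person has entered the travel lane

def choose_path(people: list[DetectedPerson]) -> str:
    """Return 'stay_in_lane' or 'swerve_to_sidewalk' using a crude utilitarian score."""
    in_lane = [p for p in people if p.in_roadway]
    on_sidewalk = [p for p in people if not p.in_roadway]
    # A rough "expected years of life lost" proxy; the lower-cost path is chosen.
    lane_cost = sum(max(0.0, 80 - p.estimated_age) for p in in_lane)
    sidewalk_cost = sum(max(0.0, 80 - p.estimated_age) for p in on_sidewalk)
    return "stay_in_lane" if lane_cost <= sidewalk_cost else "swerve_to_sidewalk"

if __name__ == "__main__":
    scene = [DetectedPerson(4, True), DetectedPerson(78, False), DetectedPerson(81, False)]
    print(choose_path(scene))
```

Whether anything like this scoring exists, who wrote it, and who approved it are fact questions a lawyer can pursue once the right artifacts are requested.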
It is crucial to investigate whether the operator followed proper protocols and what those protocols are. Did the operator exercise reasonable care? What is the standard for reasonable care when humans are intertwined with AI systems capable of autonomous operation? Did the operator make objectively reasonable attempts to prevent the errant targeting? What was the operator’s training, experience, adherence to standard operating procedures, and response to the malfunctioning drone?
Whatever rules the military followed, as an entity it is responsible for deploying and utilizing autonomous drones while exercising due diligence to prevent harm to innocent civilians. This duty should include ensuring the operators are adequately trained, the drones are maintained in proper working condition, and there are safeguards in place to detect and rectify errors or malfunctions. Evaluating this is not only something you should do in representing the deceased person’s family, but also something you should expect others in the causal chain to do as they seek to shift liability elsewhere.
These Systems Are Never Built By Just One Entity
Private companies as well as the military rely on a network of supporting companies to provide the pieces of what becomes a complete autonomous AI system. In vehicles alone, for example, hundreds of sensors, relay systems, notifications, and other components come together to provide the autonomous action needed to safely drive the vehicle.
The defense contractor involved in manufacturing the autonomous drone has a fundamental duty to design and produce systems that are safe, reliable, and fit for their intended purpose. This duty would include conducting thorough research and development, employing robust quality control measures, and adhering to industry standards and regulations. (Again, given this is a fairly new field, what are the industry standards?). The contractor must ensure that the drone's software, hardware, and components are carefully designed and integrated to minimize the risk of errors, malfunctions, or unintended consequences. Were these systems tested? Where is that data?
Defects At the Outset
If the incident resulted from a defect in the design, manufacturing, or labeling of the autonomous drone, a product liability claim against a number of persons/entities in the causal chain may lie. Product liability claims typically involve establishing that a product was defective, the defect caused the harm, and the victim suffered damages as a result. In this case, the defect may arise from errors in the drone's software algorithms, faulty hardware components, or inadequate safety mechanisms. These defects could have emerged at the time of production or later, through fine-tuning or other modifications made to the system after production and delivery to the military.
The Autonomous AI and Open Source
The federal government issued an Executive Order just two years ago mandating that all federal contractors who provide software as part of their contracts also provide a document called an SBOM (Software Bill of Materials). That SBOM is supposed to be a list of all the software components contained in that software. It then becomes a tool for the government to assess the number and seriousness of any exploitable vulnerabilities in that software. What evaluation of SBOMs reveals is that all companies, even the largest and most sophisticated, release software that contains open source components. These components can be regularly maintained and a minor part of the release, or they can be old, poorly maintained or unmaintained, and critical to the operation of the larger software package.
The software components of an autonomous system, including the drone concept in our hypothetical, will involve many other smaller components. Reviewing the required SBOMs provided to the government as part of the contractor’s obligation may reveal the use of and reliance on proprietary or open source tools with known vulnerabilities. You never know until you look, and you need to know enough to ask and where to look.
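As a rough illustration of how an expert might begin triaging an SBOM: the sketch below assumes a CycloneDX-format SBOM in JSON and a short list of known-vulnerable components supplied separately (for example, compiled from public vulnerability databases). The file name and the advisory list are placeholders, not a real drone SBOM.

```python
# Minimal sketch: flagging SBOM components that appear on a known-vulnerable list.
# Assumes a CycloneDX-style JSON SBOM; file name and advisory list are placeholders.
import json

def flag_components(sbom_path, advisories):
    """Return SBOM components whose (name, version) appears in the advisory list."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    flagged = []
    for comp in sbom.get("components", []):  # CycloneDX lists components here
        key = (comp.get("name"), comp.get("version"))
        if key in advisories:
            flagged.append({"name": key[0], "version": key[1], "purl": comp.get("purl")})
    return flagged

if __name__ == "__main__":
    # Illustrative examples of publicly known vulnerable component versions.
    known_bad = {("log4j-core", "2.14.1"), ("openssl", "1.0.1f")}
    for hit in flag_components("drone_sbom.json", known_bad):
        print(hit)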
Did the AI company/contractor providing all or part of the software used in the drone system conduct thorough testing and validation of the software? Did they try to break it by creating unlikely but possible circumstances in the operation of those systems? Potential claims against the AI company could include product liability, negligence, or breach of contract. You might need to demonstrate that the software contained defects that rendered it unreasonably dangerous and that these defects directly caused the wrongful targeting incident. The AI company may raise defenses such as the adequacy of the warnings, instructions, and user training it provided, or point to inadequate initial or continuing training provided by the military.
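What might "trying to break it" look like in practice? One common form is an edge-case test suite that feeds deliberately degraded inputs to the targeting logic and asserts that the system abstains rather than producing a high-confidence result. The sketch below is a self-contained illustration using pytest; classify_target is a trivial stand-in written for this example, not any contractor's real interface.

```python
# Minimal sketch of an edge-case ("try to break it") test. In a real suite the
# test would import the contractor's actual classifier; classify_target below
# is a hypothetical stand-in so the sketch runs on its own.
import pytest

def classify_target(visibility: float, contrast: float) -> tuple[str, float]:
    """Stand-in classifier: returns (label, confidence). Hypothetical logic."""
    confidence = visibility * contrast
    return ("hostile" if confidence > 0.5 else "unknown", confidence)

@pytest.mark.parametrize("visibility,contrast", [
    (0.05, 0.10),  # dense fog, low contrast
    (0.20, 0.05),  # dusk, washed-out imagery
    (1.00, 0.02),  # clear air, sensor near saturation
])
def test_degraded_inputs_do_not_yield_high_confidence_hostile(visibility, contrast):
    label, confidence = classify_target(visibility, contrast)
    # Under severely degraded inputs the system should abstain rather than emit
    # a high-confidence "hostile" call suitable for weapons release.
    assert not (label == "hostile" and confidence > 0.9)
```

Whether a suite like this exists, what scenarios it covers, and what its results were are exactly the "Were these systems tested? Where is that data?" questions in discoverable form.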
Legal Considerations and Defenses
Proximate Cause and Foreseeability
In any liability analysis, establishing proximate cause is crucial. It requires demonstrating a direct causal link between the actions or omissions of the entities involved and the harm suffered by the victim. In the case of our hypothetical autonomous drone incident, it must be determined whether the actions or failures of the military member, defense contractor, AI company, or some other third-party company were the direct cause of the wrongful targeting and subsequent harm. Is it possible that any of these entities could have foreseen the malfunction or series of errors that resulted in the errant targeting and deployment of the weapon? Here again, one would expect testing of the system, both its parts and the system as a whole, to reveal how often it performed as expected.
Compliance with Industry Standards and Regulations
Autonomous military drones are subject to a range of laws, regulations, and industry standards that govern their development, deployment, and use. These regulations may include national laws specific to the use of drones in military operations, international agreements related to the use of autonomous weapons systems, and industry-specific standards and guidelines established by organizations or regulatory bodies. As of now, there is no generalized rule requiring those deploying autonomous systems, like those used in vehicles or healthcare resource-allocation algorithms, to disclose the way in which these systems are tuned or directed to favor one consideration over another. In the military context, some of the rules involved may well be non-public for national security purposes, erecting yet another hurdle for you and your client.
Contractual Agreements
Get ready to read a lot of contracts. The companies involved in the production of autonomous systems will all have contracts with subcontractors and others, including the military in this hypothetical. Helpfully, there is a raft of new tools being released every day that automate contract summarization. Many of these tools also enable a lawyer to pose natural-language questions and have an AI tool, such as one built on LangChain, produce natural-language responses about the terms, liability, duration, and other contract matters.
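As a minimal sketch of that workflow (assuming the openai Python package and an API key are configured): read the contract text, pose a question, and get a natural-language answer back. The file path, model name, and question below are placeholders; tools such as LangChain wrap a similar pattern with more structure, and any output still needs verification by the lawyer against the contract itself.

```python
# Minimal sketch: posing a natural-language question about a contract to an LLM
# via the OpenAI Python client. File path, model name, and question are placeholders.
from openai import OpenAI

def ask_about_contract(contract_path: str, question: str) -> str:
    with open(contract_path) as f:
        contract_text = f.read()
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You summarize contracts for a lawyer. Quote clause numbers where possible."},
            {"role": "user",
             "content": f"Contract text:\n{contract_text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_about_contract(
        "prime_contract.txt",
        "Summarize any indemnification and limitation-of-liability clauses.",
    ))
```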
Reviewing those contracts, however, will require some technical knowledge. At this point, a zealous attorney would have to use an expert, ideally one who can spot legal issues as well as interpret and explain the technical jargon of such agreements. This work includes examining contracts between the military and the defense contractor, the defense contractor and the AI company, and any agreements involving the third-party company responsible for fine-tuning the drone's software.
Some key contract terms to review and organize:
a. Indemnification clauses: These clauses determine whether one party agrees to compensate another party for any losses, damages, or liabilities incurred due to the incident.
b. Limitation of liability clauses: These clauses establish the maximum amount of damages that a party can be held liable for in case of a breach or incident.
Conclusion
The prevalence of autonomous AI tools in our world today is undeniable. They have become an integral part of various industries, from virtual assistants and recommendation systems to healthcare diagnostics and industrial automation. However, the widespread use of these tools raises important questions about liability and accountability when things go wrong.
Analyzing liability in the AI era requires a comprehensive understanding of the complex layers of persons and entities involved in the development and deployment of autonomous AI systems. In the case of incidents like the hypothetical military drone targeting, multiple factors come into play, including the actions of human operators, the design and manufacturing of the autonomous drone, and the software algorithms that govern its decision-making.
Establishing proximate cause and foreseeability is crucial when determining liability. It is necessary to investigate whether the entities involved followed proper protocols, exercised reasonable care, and adhered to industry standards and regulations. The examination should encompass the training, experience, and response of human operators, the design and production processes of the autonomous system, and any potential defects in the software or hardware components.
Contractual agreements between the parties involved also play a significant role. Reviewing these contracts, including indemnification and limitation of liability clauses, is essential for understanding the allocation of responsibilities and potential avenues for seeking compensation.
As AI technology continues to advance, it is imperative to develop clear frameworks and regulations for liability and accountability. The complexity and autonomous nature of AI systems require a multidisciplinary approach, involving legal expertise, technical knowledge, and the ability to navigate the intricate relationships between various stakeholders.
In this rapidly evolving landscape, the legal profession must adapt and stay informed about the latest developments in AI technology to effectively represent clients and address the unique challenges posed by autonomous AI systems.