As artificial intelligence (AI) evolves, lawyers representing clients in both civil and criminal matters will inevitably encounter the terms “agentic” and “agentic AI” when AI systems are involved. So-called agentic AI systems exhibit autonomy, adaptability and decision-making capabilities resembling human agency. Some call these system designs AGI (Artificial General Intelligence), or at least the next stepping stone toward AGI. This post will demystify agentic AI, providing you with a comprehensive understanding of its design, function, and the liability challenges it presents. By the end, you’ll have actionable insights to navigate cases involving agentic AI effectively and to prevent opposing counsel from using its complexity to deflect or obscure liability.
What is Agentic AI?
Agentic AI refers to systems designed to perform tasks autonomously, often by making decisions based on predefined goals or learned patterns. Unlike traditional AI systems that strictly follow programmed instructions, agentic AI can:
• Perceive its environment (e.g., through sensors, data feeds, or user inputs),
• Decide on actions based on goals, constraints, and learned behavior (this is one place, among others, where things can get “Terminator”-like; the military is already experimenting with autonomous weapons), and
• Act on its environment to achieve its objectives with or without human intervention.
Examples of agentic AI that have already been in use for years include self-driving cars, autonomous drones, and complex stock or futures trading algorithms. These systems operate with a level of independence that raises questions about accountability when things go wrong. Most, if not all, of these autonomous approaches rely on neural networks, a species of machine learning algorithm that often creates the “black box” problem.1
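To make that perceive-decide-act loop concrete, here is a minimal, purely hypothetical Python sketch of the cycle an agentic trading system runs. The function names (get_market_price, place_order), the 2% rule, and the 100-share order size are illustrative assumptions, not any real firm’s code:

```python
# Hypothetical sketch of an agentic trading loop, for illustration only.
# get_market_price() and place_order() stand in for real market-data and
# brokerage APIs; the 2% rule and 100-share size are arbitrary examples.

def get_market_price(symbol: str) -> float:
    """Perception: stand-in for a live market-data feed."""
    raise NotImplementedError("Connect a real data source here.")

def place_order(symbol: str, side: str, quantity: int) -> None:
    """Action: stand-in for a brokerage order API."""
    raise NotImplementedError("Connect a real broker here.")

def run_agent(symbol: str, reference_price: float) -> None:
    while True:                                  # runs unattended, indefinitely
        price = get_market_price(symbol)         # 1. perceive the environment
        change = (price - reference_price) / reference_price
        if change <= -0.02:                      # 2. decide, per its programmed goal
            place_order(symbol, "BUY", 100)      # 3. act, with no human approval
        elif change >= 0.02:
            place_order(symbol, "SELL", 100)
        reference_price = price
```

The legal significance lies in the loop itself: once it is started, no human reviews the individual decisions it makes.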
Agentic AI in Action: Hypothetical Case Studies
Case Study 1: Autonomous Vehicle Accident
A self-driving car, operating under agentic AI, misinterprets sensor data and crashes into a pedestrian. It made the decision to maneuver in a way that collided with the pedestrian based upon its agentic architecture, which imposed an ethical framework determined by the software designer. Who is liable: the manufacturer, the software developer, or the vehicle owner? Were there unusual features of the environment that interfered with the agentic decision-making?
Case Study 2: AI-Powered Medical Device Malfunction
An AI-powered surgical robot decides to deviate from the prescribed procedure, leading to patient injury. Did the healthcare provider neglect oversight, or did the AI act beyond reasonable expectation? Who programmed the agentic architecture, and were the decisions pre-programmed into that software aligned with standard medical practice?
Case Study 3: Algorithmic Trading Catastrophe
A financial firm’s AI agent autonomously executes trades that destabilize markets. Can liability be placed on the firm, or does the AI’s “decision-making autonomy” shield it? What regulations govern the use of such autonomous or agentic trading systems? Does the firm’s liability insurance carrier have an exclusion for such tools?
Breaking Down Agentic AI Liability
When an AI system causes harm, you have to dissect its functionality to determine liability. Here’s a roadmap:
1. Identify the Role of Agency
• Was the AI functioning as an agent (operating autonomously) or as a tool (executing direct instructions)?
• The more agentic the system, the more critical it becomes to evaluate whether its behavior was foreseeable or intentional.
2. Pinpoint Responsibility
Liability in agentic AI cases typically falls on one or more parties:
• Operator: The individual or entity deploying the AI system.
• Manufacturer/Developer: The party responsible for designing and programming the AI.
• End User: The person or organization that integrates the AI into their operations.
Understanding How AI Agents Work
Agentic Design Components
1. Perception Layer: Collects data from sensors or inputs.
2. Decision-Making Layer: Uses algorithms, machine learning models, and pre-programmed rules to evaluate options.
3. Action Layer: Executes decisions via actuators or outputs.
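A rough, hypothetical sketch of how those three layers fit together follows; the lane-keeping scenario, the 0.3-meter threshold, and the function names are made up for illustration, and a real system would replace the simple rule with trained models:

```python
# Illustrative three-layer agent skeleton (hypothetical and simplified).

def estimate_lane_offset(camera_frame: dict) -> float:
    """Stand-in perception model; a real system would run a vision network."""
    return float(camera_frame.get("offset_m", 0.0))

class LaneKeepingAgent:
    def perceive(self, camera_frame: dict) -> dict:
        # Perception layer: turn raw sensor data into usable features.
        return {"lane_offset_m": estimate_lane_offset(camera_frame)}

    def decide(self, features: dict) -> str:
        # Decision-making layer: apply rules/models to choose an action.
        if abs(features["lane_offset_m"]) > 0.3:   # made-up threshold
            return "steer_back_to_center"
        return "hold_course"

    def act(self, decision: str) -> None:
        # Action layer: in a real vehicle this would drive the actuators.
        print(f"actuator command: {decision}")

agent = LaneKeepingAgent()
agent.act(agent.decide(agent.perceive({"offset_m": 0.5})))
```

Each layer is a place where design choices are made, and therefore a place to look when assigning responsibility.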
Types of Agents
• Reactive Agents: Respond to stimuli without long-term planning.
• Deliberative Agents: Make decisions based on planning and reasoning.
• Learning Agents: Adapt behavior based on experience or training.
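To illustrate the practical difference between the first and third types (a deliberative planner is harder to show briefly), here is a toy, hypothetical thermostat example; the feedback signal and the 0.1-degree adjustment step are invented for illustration:

```python
import random

# Reactive agent: a fixed stimulus-response rule with no memory or planning.
def reactive_thermostat(temp_c: float) -> str:
    return "heat_on" if temp_c < 20.0 else "heat_off"

# Learning agent: adjusts its own rule from experience (here, a toy
# feedback signal); real systems do this at scale with reinforcement learning.
class LearningThermostat:
    def __init__(self) -> None:
        self.setpoint = 20.0

    def act(self, temp_c: float) -> str:
        return "heat_on" if temp_c < self.setpoint else "heat_off"

    def learn(self, comfort_feedback: float) -> None:
        # Nudge the setpoint toward whatever earned positive feedback.
        self.setpoint += 0.1 * comfort_feedback

agent = LearningThermostat()
for _ in range(10):
    agent.act(random.uniform(15, 25))
    agent.learn(random.choice([-1.0, 1.0]))   # simulated occupant feedback
print(round(agent.setpoint, 2))               # the rule has drifted from 20.0
```

Note that the learning agent’s rule after deployment is no longer exactly the rule the developer shipped, which is precisely what complicates foreseeability arguments.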
Some Key Legal Questions to Ask in Agentic AI Cases
1. Design and Intent:
• Was the AI designed to operate autonomously?
• Were safeguards implemented to limit unintended actions?
2. Training and Testing:
• Was the AI properly trained and tested for the environment it was deployed in?
• Were there known limitations or risks disclosed by the developer?
3. Control and Oversight:
• What degree of control did the operator or end user retain?
• Were there mechanisms to override the AI’s decisions?
4. Causation:
• Did the AI’s autonomy directly lead to the injury, or was human intervention involved?
• Was the AI’s action foreseeable based on its design and training?
As of this writing, multiple state laws, federal guidelines and the EU AI Act mandate that persons marketing, purchasing and deploying AI systems maintain transparency about the way those systems are structured, trained, tested, etc. You have to get the latest on those regulations to assess whether the potentially liable parties operated in compliance with them.
Defendant Strategies: Using Agentic Design to Obscure Liability
1. Shifting the Blame
Defendants may argue that:
• The AI acted independently and unpredictably, akin to a “force majeure.”
• The harm resulted from external factors (e.g., user misuse or environmental anomalies).
2. Exploiting Technical Complexity
Defendants might inundate the court with technical jargon to confuse the issue:
• Claiming the AI operates in a “black box” manner, making its actions inexplicable.
• Overemphasizing the AI’s autonomy to distance themselves from responsibility.
Countering Defense Tactics: Best Practices for Lawyers
1. Collaborate with Experts
Engage AI experts early to:
• Break down technical concepts for the court.
• Evaluate the foreseeability and preventability of the AI’s actions.
2. Focus on Foreseeability
Demonstrate that:
• The AI’s harmful behavior was predictable given its design and training.
• Adequate safeguards could have mitigated the risk.
• Again, compare the conduct of the potentially liable parties with then-existing laws, guidelines and industry best practices.
3. Investigate Thoroughly
• Examine the AI’s training data, decision-making algorithms, and operational history.
• Request disclosure of internal documents, such as risk assessments and incident reports.
4. Simplify the Narrative
Frame the case in terms of human accountability:
• Highlight the role of developers, operators, and users in designing, deploying, and overseeing the AI.
• Emphasize the ethical and legal responsibility to anticipate and mitigate risks.
Preparing for the Future of Agentic AI Cases
Regulatory Trends
• The EU’s AI Act establishes strict requirements for high-risk AI systems.
• The U.S. Federal Trade Commission (FTC) is increasing scrutiny of AI practices, particularly regarding transparency and accountability.
Best Practices for Clients Marketing, Purchasing and/or Deploying AI Systems
Advise clients to:
• Implement robust oversight mechanisms for AI systems.
• Maintain detailed documentation of AI design, training, and operational decisions.
• Regularly audit AI systems for compliance with legal and ethical standards.
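As one hypothetical way to implement the documentation point above, a client can log every automated decision as it is made, so a contemporaneous record exists if litigation follows. Here is a minimal sketch using only Python’s standard library; the model name, record fields, and file path are placeholders:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit wrapper: records what the system saw, what it decided,
# and which model version decided it, so the record can be produced later.
def log_decision(model_version: str, inputs: dict, decision: str,
                 path: str = "ai_decision_log.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("credit-model-v1.3", {"income": 52000, "debt_ratio": 0.41}, "deny")
```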
Conclusion
Agentic AI will challenge traditional notions of liability, demanding a nuanced understanding of its design and function. By dissecting how agentic AI systems operate and preparing accordingly, you can effectively represent clients in cases involving agentic AI whether you are on the plaintiff’s side or the defense. The key is to cut through the claimed complexity, be prepared to fully educate your judge, and establish clear lines of responsibility.
What is a Neural Network?
A neural network is a type of artificial intelligence (AI) inspired by the structure and functioning of the human brain. It is a system of algorithms designed to recognize patterns, interpret data and make decisions or predictions based on inputs. You will encounter neural networks in cases involving AI systems like predictive analytics, facial recognition, self-driving cars or financial fraud detection.
How Neural Networks Work:
1. Structure:
• Neural networks consist of layers of nodes (or “neurons”) connected by weights.
• Input Layer: Receives raw data, such as text, images, or numerical inputs.
• Hidden Layers: Process the data through mathematical operations, learning patterns and relationships.
• Output Layer: Produces the final decision or prediction (e.g., “spam” or “not spam,” or a probability score).
2. Training:
• Neural networks are “trained” using large datasets.
• During training, the network adjusts its internal parameters (weights) to improve accuracy.
• Training involves repeated processing of data to “learn” complex relationships.
3. Learning Methodologies:
• Supervised Learning: The network learns from labeled data (e.g., “This image is a cat”).
• Unsupervised Learning: The network identifies patterns without labels (e.g., clustering similar data).
• Reinforcement Learning: The network learns by receiving rewards for desired outcomes.
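A minimal supervised-learning sketch using the scikit-learn library (assumed to be installed) shows all three pieces at once: labeled data, a small input-hidden-output structure, and training that adjusts the weights. The “loan default” framing, the six data points, and the network size are invented for illustration:

```python
# Toy supervised learning with scikit-learn (assumed installed).
from sklearn.neural_network import MLPClassifier

# Labeled training data: [income_in_$10k, debt_ratio] -> 1 = defaulted, 0 = repaid.
X = [[3, 0.9], [4, 0.8], [5, 0.7], [9, 0.2], [10, 0.1], [12, 0.15]]
y = [1, 1, 1, 0, 0, 0]

# Input layer: 2 features; hidden layer: 8 neurons; output layer: default/repaid.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)                     # "training": weights adjusted to fit the labels

print(model.predict([[4, 0.85]]))   # likely [1]: resembles the "default" examples
```

Unsupervised and reinforcement learning differ in the kind of feedback used (no labels, or rewards), but in each case training still means adjusting internal weights.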
Practical Example for Lawyers:
Imagine a facial recognition system used by law enforcement. A neural network analyzes input images, identifies patterns (e.g., eye shapes, jawlines), and matches these patterns against a database to identify individuals. If the system misidentifies someone, legal liability and evidentiary challenges may arise.
The Black Box Problem
The black box problem refers to the lack of transparency in how neural networks make decisions. While these systems can deliver accurate and useful results, the underlying decision-making process is often opaque even to their creators.
Why Neural Networks Are a Black Box:
1. Complexity of Operations:
• Neural networks involve thousands or millions of interconnected neurons performing complex mathematical calculations. The resulting patterns are difficult for humans to interpret.
2. Non-Intuitive Behavior:
• The internal weights and operations are not human-readable, making it challenging to trace how specific inputs lead to specific outputs.
3. Emergent Properties:
• Neural networks may exhibit behavior that was not explicitly programmed, such as recognizing unexpected patterns in data.
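To see why even the developer cannot simply “read off” the reasoning, consider what training actually produces. Continuing the invented loan example from above (scikit-learn assumed installed), the learned parameters are just matrices of unlabeled decimal numbers:

```python
from sklearn.neural_network import MLPClassifier

X = [[3, 0.9], [4, 0.8], [5, 0.7], [9, 0.2], [10, 0.1], [12, 0.15]]
y = [1, 1, 1, 0, 0, 0]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# The learned parameters: arrays of unlabeled decimal numbers. Nothing in
# them says, in human terms, why one applicant was flagged and another was not.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer} weights:\n{weights}")
```

Explainability tools exist, and regulators increasingly expect them, but they approximate what the network is doing; they do not turn these weight matrices into a human-readable chain of reasoning.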
Legal Challenges Posed by the Black Box Problem:
1. Lack of Explainability:
• Lawyers may struggle to prove how or why an AI system made a decision.
• In criminal cases, this could challenge the admissibility of AI-generated evidence under standards like Daubert or Frye (in the U.S.).
2. Accountability Issues:
• If a neural network produces a harmful result (e.g., wrongful arrest, biased hiring decision), who is liable? The developer, operator, or user?
3. Regulatory Concerns:
• Jurisdictions increasingly require transparency in AI. For instance, the EU’s AI Act demands explainability for high-risk AI systems.