Derek Mobley, a 40-year-old job seeker with degrees in finance and network systems administration, filed a lawsuit in the U.S. District Court for the Northern District of California earlier this year against Workday, a company providing AI-based job screening tools. Mobley alleges Workday’s AI tools have systematically discriminated against him and other applicants from protected categories. According to Mobley, he has applied for 80-100 positions since 2018, all of which used Workday’s screening software, but has not been offered employment.
The lawsuit claims that Workday’s screening algorithms enable preselection of candidates, excluding individuals from protected categories. Mobley alleges these algorithms incorporate human biases, both conscious and unconscious, resulting in discriminatory hiring outcomes. He is seeking class-action status to represent similarly affected individuals and is advocating for changes to Workday’s AI products and practices to promote equitable hiring.
Workday denies the allegations, describing the lawsuit as unfounded. The company maintains that it is committed to developing ethical AI tools, emphasizing that its products undergo rigorous legal and risk-based reviews to prevent bias and ensure compliance with employment laws.
This case underscores broader concerns about AI bias in employment practices, particularly its potential to disadvantage marginalized groups. Critics argue that insufficient attention to diversity during AI development can perpetuate or amplify existing inequalities, making fair employment opportunities a pressing issue in the age of AI.
Is an AI System the Not-So-Secret Agent?
We are all generally familiar with agency theory in various legal contexts: personal injury, workplace harassment, discrimination, and so on. The theory in this case is slightly different. Mobley alleges that he is entitled to sue a software company, Workday, whose offerings include an AI system used in preselection hiring functions. He alleges that more than 100 companies rejected his job applications, all of them using Workday behind the scenes to screen his and other applicants’ resumes.
Instead of suing the companies that relied on Workday and its AI-powered employment screening functions, he seeks to go directly after Workday itself.
In July, Workday successfully had the court dismiss many, but not all, of Mobley’s claims, a decision that sent a shockwave through the AI software world and the world of employment law itself. The ruling now allows AI System providers like Workday to be held accountable under an “agency” theory. In addition to employers potentially facing liability for discriminatory hiring practices, the court is leaning toward a liability theory that also permits the imposition of damages against companies deploying AI Systems that operate in a discriminatory manner.
AI Systems and Agency Theory
Traditionally, agency theory referred to a person acting as an agent for a corporation or perhaps for another person. Think of a bodyguard for a famous person who uses excessive force in protecting that person and catches a lawsuit for harming an otherwise merely exuberant fan. Most of us would recognize the potential liability of the protected person under agency-related theories like negligent hiring, training, and retention. The Mobley case could be the start of a new branch of agency theory, one that lays liability for the use of AI Systems on both the purchaser deploying the system and the AI System provider simultaneously.
The court’s explanation included the word “delegation,” which most traditionally refers to delegating a task to another human:
“The plaintiff plausibly alleges that Workday’s customers delegate traditional hiring functions, including rejecting applicants, to the algorithmic decision-making tools provided by Workday.”
It is neither illogical nor extreme for the court to analogize the agency relationship between humans to that between a company and its AI System, in light of the claimed discriminatory design, implementation, or performance. The court held that it could see no difference between agency liability among humans and agency liability between humans (or corporations) and AI Systems.
Considerations for Your Clients
Obviously, the larger impact of the court’s decision is that AI System providers now face a liability concern extending to each of their customers. For some AI Systems, that number could be in the thousands. It should be noted that the docket currently shows the appearance of lawyers for the federal government via the Equal Employment Opportunity Commission (EEOC), whose position mirrors the plaintiff’s, underlining the liability concerns for AI System providers. That should not really be a surprise for those of you who have been subscribers of this Substack for the past year. There have been multiple Executive Orders, National Security Memoranda, and other guideline documents released by the current administration repeatedly highlighting concerns over discrimination and bias in AI systems. In short, the federal government will be watching AI System providers and will likely reinforce this agency theory in other cases.
Also, as highlighted in previous posts here, merely deploying another company’s AI System without careful analysis and testing is asking for trouble. Any injury caused by a company’s use of a provider’s AI System is going to create some liability. The best way for a company to try to shift that liability to the AI System provider is to show its own efforts at testing and validation. There may even be a role for in-house counsel in negotiating the service agreements with those providers to shift that liability through indemnification and similar provisions.
The court’s findings in denying the motion to dismiss some claims carry insights for Mobley and future similar cases:
• Employer Liability: Regardless of the promises of a given AI System as to its testing and protection against algorithmic bias or discrimination, the company purchasing and deploying that system still retains liability for injuries caused by its use. There is no “hey, it’s the AI System that did that, not me” defense that will be protective.
• The Lure of Automation Should Come with Scrutiny: AI Systems can and do make traditionally mundane, detail-oriented, and tedious tasks much more efficient. However, the trade-off in handing those tasks to AI Systems, and perhaps saving money on the human employees who would otherwise complete them, is that a company must learn as much as possible about the pre-deployment testing of that system in order to assess its liability exposure.
Your Options as an AI Systems Purchaser
Well, to state the obvious, the legal concerns in purchasing technology like servers, office software, and database tools are unlike those in purchasing AI Systems. Different perspectives, and perhaps different expertise, are necessary. A lawyer unfamiliar with AI who reads the agreements for an AI System purchase is likely to miss legal issues both in the language that is there and in the language that was omitted. Conversely, an AI System expert cannot readily evaluate the liability potholes in a legal agreement from an AI System provider. The ideal team, then, is composed of both resources, and perhaps a unicorn lawyer who also has experience developing and deploying AI Systems.
A review of current state AI regulations (to be published in the next post here) reveals that many are focused on the use of AI in employment decision-making, aiming to protect residents from the bias and discrimination harms that can arise when companies rely too heavily on AI Systems.
In the World of HR… It’s Already Happening
One of the first widespread deployments of AI in corporate America has been in HR. The proliferation of hiring platforms like Indeed, Monster, LinkedIn, and others means enormous volumes of job seekers and employers trying to match up efficiently. It makes sense that AI Systems tuned to these interactions would be readily reviewed and tested by companies. The efficiency gains would be incredible for AI Systems that can make that process even 50% better.
Why Ask? What To Ask?
Algorithmic Transparency
• Can you explain how the AI system works, including the algorithms used for screening applicants?
• How does the system prioritize or rank candidates, and what factors are weighted most heavily in its decision-making process? (A simplified illustration of weighted scoring appears after this list.)
• Are there mechanisms to ensure the system’s outputs can be understood and justified by human users?
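For context on what “weighting” can mean in practice, many screening tools reduce, at some level, to a weighted scoring model over resume features. The sketch below is purely illustrative, not Workday’s or any particular vendor’s actual method; the feature names and weights are hypothetical. It shows why the weighting question matters: a single feature can quietly dominate the ranking, or encode a biased signal.

```python
# A purely illustrative weighted-scoring model of the kind many
# screening tools reduce to at some level. Feature names and weights
# are hypothetical; real systems are far more complex and more opaque.

WEIGHTS = {
    "years_experience": 0.4,
    "degree_match": 0.3,
    "keyword_overlap": 0.2,
    "employment_gap_penalty": -0.1,  # features like this can encode bias
}

def score(candidate):
    """Sum weighted features (each normalized to 0-1); higher ranks higher."""
    return sum(WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS)

candidate = {
    "years_experience": 0.8,
    "degree_match": 1.0,
    "keyword_overlap": 0.6,
    "employment_gap_penalty": 1.0,  # candidate has a gap, so penalty applies
}
print(f"candidate score: {score(candidate):.2f}")  # prints 0.64
```

Asking a vendor which features carry the largest weights, and whether any of them correlate with protected characteristics, is the practical version of the transparency questions above.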
Bias Mitigation and Testing
• Has the system been tested for bias against protected classes (e.g., race, gender, age, disability)?
• What methodologies or frameworks were used to test for bias (e.g., disparate impact analysis, sketched after this list)?
• How often is the system audited for potential biases, and who conducts these audits (internal team, third-party, etc.)?
• What steps are taken to address and correct bias if it is identified in the system?
• Do you comply with specific state regulations requiring regular audits for bias (e.g., New York City’s Local Law 144)?
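To make the disparate impact question concrete, here is a minimal sketch of the four-fifths rule calculation that bias audits (including those required under New York City’s Local Law 144) typically build on. The group labels and numbers are hypothetical, and a ratio below 0.8 is a screening flag, not a legal conclusion; a real audit should be performed by qualified auditors on actual outcome data.

```python
# A minimal sketch of a disparate impact ("four-fifths rule") check,
# assuming you can export screening outcomes by demographic group.
# Group labels and counts below are hypothetical.

def selection_rates(outcomes):
    """Selection rate (advanced / total applicants) for each group."""
    return {g: advanced / total for g, (advanced, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate relative to the highest-rate group.

    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 is a
    common flag for potential adverse impact.
    """
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

# group -> (applicants advanced past screening, total applicants)
outcomes = {
    "group_a": (48, 120),  # 40.0% selection rate
    "group_b": (30, 110),  # ~27.3% selection rate
}

for group, ratio in impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

Group_b’s ratio here works out to about 0.68, below the 0.8 threshold, which is exactly the kind of result that should trigger the follow-up questions above about correction and re-auditing.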
Training Data
• What is the source of the training data used to develop the AI model?
• Does the training data include diverse representations across race, gender, and other protected categories? (A simple representation check is sketched after this list.)
• How is the data updated to reflect evolving workforce demographics and hiring patterns?
• Has the training data been vetted for historical biases that could influence the AI’s decisions?
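The training data questions can be grounded in an equally simple first pass: compare group proportions in the training data against a relevant reference population (census or labor-market figures, for example). A minimal sketch, with hypothetical labels and numbers:

```python
# A minimal sketch comparing training-data composition against a
# reference population. All group labels and figures are hypothetical;
# choosing the right reference population is itself a judgment call.

training_counts = {"group_a": 7200, "group_b": 1800, "group_c": 1000}
reference_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

total = sum(training_counts.values())
for group, count in training_counts.items():
    share = count / total
    gap = share - reference_share[group]
    print(f"{group}: {share:.1%} of training data vs "
          f"{reference_share[group]:.0%} reference (gap {gap:+.1%})")
```

A large gap does not prove bias, but it is the kind of concrete answer a vendor should be able to give to the representation question, rather than a general assurance.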
Compliance with Laws and Regulations
• How does the system ensure compliance with federal anti-discrimination laws, such as Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA)?
• How does the system address state-specific AI and anti-discrimination laws (e.g., Illinois AI Video Interview Act, California Civil Rights laws)?
• Can you provide documentation of legal reviews or certifications demonstrating compliance with relevant employment and AI laws?
Customization and Oversight
• Can the system be customized to align with my company’s specific hiring criteria and non-discrimination policies?
• Can human decision-makers review and override AI-generated decisions to ensure fairness?
• What role does human oversight play in monitoring the tool’s outputs and decisions?
Error Handling and Accountability
• How does the system handle errors or anomalies in applicant screening?
• If an error leads to an unfair outcome for an applicant, what remedies are available?
• Does the provider assume liability for discriminatory outcomes caused by the tool? If not, how are liability risks allocated in the terms of use or contract?
Applicant Notification and Consent
• Does the system notify applicants that AI is being used in the hiring process, as required by some state laws?
• How does the system handle requests from applicants for explanations or challenges to AI-based decisions?
• What options are available for applicants to opt out of AI screening or request alternative methods of evaluation?
Data Privacy and Security
• How is applicant data stored and protected, and what steps are taken to ensure compliance with data protection laws (e.g., GDPR, CCPA)?
• Are local laws related to data retention and destruction policies adhered to by the system?
• Who has access to the data, and is it shared with any third parties?
Monitoring and Updates
• How often is the system updated to reflect changes in legal requirements or best practices in hiring?
• What is your process for notifying customers about updates or legal changes that could impact compliance?
Documentation and Support
• Can you provide documentation of audits, certifications, or legal reviews demonstrating the system’s compliance with anti-discrimination laws?
• What training or resources do you provide for companies to ensure proper and compliant use of the tool?
• Will you assist in responding to government inquiries or audits related to the use of your system?
Going Forward
I didn’t title this last section “Conclusion” on purpose. There really is no conclusion to these considerations, only an ongoing assessment of liability concerns as regulations and laws evolve and case law evolves with them.
When purchasing AI tools for hiring and other HR-related tasks, companies must carefully evaluate how these systems align with legal, ethical, and operational standards. Ensuring compliance with anti-discrimination laws and minimizing liability requires asking the right questions about bias mitigation, transparency, and regulatory adherence. Organizations should seek AI providers that prioritize fairness, accountability, and robust auditing practices while maintaining human oversight in decision-making. By taking these proactive steps, businesses can leverage AI’s efficiency while safeguarding equity and trust in their hiring processes.