We have written here (and it has been written elsewhere) about the existing and emerging regulation of AI. (Of course, one of the fundamental problems with such regulation is getting everyone to agree on a definition of what AI is.) Leaving that aside for the moment, most of the regulation so far focuses on the things most reasonable consumers are concerned about, to wit: AI-powered tools being used to discriminate against or otherwise unlawfully harm individuals. However, the regulations and guidelines thus far have also converged on another domain: transparency and record-keeping.
“The palest ink is better than the best memory.” (Chinese proverb)
The typical AI-based application involves several common steps in the development of its underlying model, whether that model processes text (the most commonly deployed right now), images, video, or audio:
Plan and Design (an application using AI) → Collect and Process Data → Input the Data into the AI (also known as training) → Build and Test the Model → Verify and Validate the Model’s Operation → Gather and Analyze Model Output → Apply the Model to Real-World Customer Data → Analyze Final Model Output → Deploy → Continue Testing and Documentation.
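Because record-keeping runs through every one of these stages (as discussed below), it helps to capture each stage as a structured, timestamped record while the work happens rather than reconstructing it later. Here is a minimal sketch of that idea in Python; the file name, stage labels, and fields are illustrative assumptions, not a prescribed format.

```python
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # illustrative append-only JSON Lines file

def record_stage(stage: str, details: dict, artifact: bytes | None = None) -> None:
    """Append a timestamped, tamper-evident record for one pipeline stage."""
    entry = {
        "stage": stage,                       # e.g., "collect_and_process_data"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "details": details,                   # parameters, data sources, reviewers
    }
    if artifact is not None:
        # Hash large artifacts (datasets, model weights) instead of storing them
        entry["artifact_sha256"] = hashlib.sha256(artifact).hexdigest()
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: log the data-collection stage of the workflow above
# (source name, row count, and reviewer are hypothetical)
record_stage(
    "collect_and_process_data",
    {"source": "customer_support_tickets_2023", "rows": 120_000,
     "pii_removed": True, "reviewer": "data-governance-team"},
)
```

An append-only log with content hashes is one simple way to make records tamper-evident, which matters if those records are later offered as evidence.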
Throughout this process, the various regulations already require, and will increasingly require, record-keeping. But record-keeping for what? The obvious answer is litigation. Record-keeping also has value in establishing regulatory compliance and in providing evidence in discovery that ideally bolsters a party’s defense to liability. But what follows from a failure to keep such records? The regulations thus far do not spell out a consequence for such failures. Loss of the right to continue doing business? A fine? A damage award in later litigation? Even though the regulations do not clearly state these consequences, courts will undoubtedly look unfavorably on any organization that sells or purchases an AI-powered tool, deploys it in a way that harms others, and has not followed the applicable record-keeping requirements.
The Frameworks Are Already Developing
The National Institute of Standards and Technology (NIST) is an agency within the U.S. Department of Commerce that establishes standards for a wide range of industries.
NIST has published the Artificial Intelligence Risk Management Framework, organized into these sections: Framing Risk; AI Risks and Trustworthiness; Effectiveness of the Risk Management Framework (RMF); and the core of the NIST-suggested RMF (Govern, Map, Measure, Manage).
The RMF Playbook is then broken down into multiple subsections detailing the relevant information to be considered at every point in the AI auditing workflow.
Govern: Establish the policies, accountability structures, and documentation practices under which the organization develops or deploys AI, including compliance with currently applicable AI regulations.
Map: Identify the context in which the AI-powered tool will operate and the risks that context presents.
Measure: Assess and track identified risks, including the ongoing, real-world effects of the AI-powered tool, to ensure it remains aligned with applicable regulations and the organization’s risk tolerance.
Manage: Prioritize and act on those risks, including the steps to be taken once the AI-powered tool is deployed.
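One lightweight way to operationalize the four core functions is to track each audit activity against the function it serves, with links to the underlying records. A minimal sketch follows; the function names come from the RMF itself, while the fields, owners, and activities are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class RmfItem:
    function: str      # "Govern", "Map", "Measure", or "Manage"
    activity: str
    owner: str
    evidence: list[str] = field(default_factory=list)  # links to supporting records
    complete: bool = False

# Hypothetical checklist entries, one per NIST AI RMF core function
checklist = [
    RmfItem("Govern", "Document accountability structure for AI risk", "GC office"),
    RmfItem("Map", "Identify deployment context and affected users", "Product"),
    RmfItem("Measure", "Track fairness and performance metrics in production", "ML team"),
    RmfItem("Manage", "Prioritize and remediate identified risks", "Risk committee"),
]

# Incomplete items become the agenda for the next audit cycle
open_items = [item for item in checklist if not item.complete]
print(f"{len(open_items)} open audit items")
```

The point of the structure is less the code than the habit: every audit activity has a named owner and a pointer to the records that prove it happened.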
Basic AI Auditing Principles
Disclosure: We at Legal AI have already been working with clients to audit AI-powered tools they have developed or purchased. What follows is a portion of the Core Auditing Principles we have been establishing.
Core Auditing Principles
The fundamental purpose of auditing AI systems is to ensure they are safe, secure and beneficial to your organization’s purposes in deploying them. All organizations have risk profiles that govern their use of a wide range of tools, processes and strategies. AI systems are merely one part of that overall strategy and their use should align with existing principles the organization already adheres to.
Safety and Security Audits
Perform thorough security risk assessments focusing on vulnerabilities related to biotechnology, cybersecurity, and critical infrastructure.
Develop continuous monitoring tools to ensure AI systems function as intended over time and adapt to new threats (a sketch of one such check appears below).
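As one concrete example of continuous monitoring, the distribution of a model’s live scores can be compared periodically against a baseline captured at validation time; significant divergence (drift) is a signal that the system may no longer be functioning as intended. A minimal sketch, assuming a Kolmogorov-Smirnov test via scipy and an illustrative significance threshold:

```python
import numpy as np
from scipy.stats import ks_2samp

def check_score_drift(baseline_scores: np.ndarray,
                      live_scores: np.ndarray,
                      alpha: float = 0.01) -> bool:
    """Return True if live scores have drifted from the validation baseline."""
    stat, p_value = ks_2samp(baseline_scores, live_scores)
    drifted = p_value < alpha
    # Every monitoring run is itself a record-keeping event (see the log above)
    print(f"KS statistic={stat:.3f}, p={p_value:.4f}, drift={drifted}")
    return drifted

# Synthetic example data standing in for real validation and production scores
baseline = np.random.default_rng(0).normal(0.60, 0.10, 5000)
live = np.random.default_rng(1).normal(0.55, 0.12, 5000)
check_score_drift(baseline, live)
```

The specific test and threshold are assumptions; what matters is that the check runs on a schedule and that each run, drifted or not, is logged.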
Innovation and Competition Audits
Verify compliance with intellectual property laws and promote fair competition in the development of AI technologies.
Assess market impact and prevent dominant firms from using key assets to disadvantage competitors.
Collaboration and Worker Support Audits
Evaluate stakeholder engagement, including workers, unions, and civil society, in the development of AI systems.
Ensure AI deployments in workplaces do not undermine workers' rights or worsen job quality.
Tool Development for Auditing
Develop automated tools to identify biases in AI systems and ensure compliance with data privacy laws (see the bias-check sketch after this list).
Incorporate manual oversight components, including expert reviews, to provide insights beyond what automated tools can offer.
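One widely used automated bias check is the disparate impact ratio, the “four-fifths rule” drawn from U.S. employment-selection guidance: the selection rate of the least-favored group divided by that of the most-favored group. A minimal sketch in pandas; the column names, sample data, and the use of 0.8 as a review trigger are illustrative assumptions.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's selection rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical decision data: group A selected at 60%, group B at 40%
decisions = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "selected": [1] * 60 + [0] * 40 + [1] * 40 + [0] * 60,
})

ratio = disparate_impact(decisions, "group", "selected")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule as an illustrative trigger
    print("Flag for expert review and document the finding.")
```

A ratio below 0.8 is not itself a legal conclusion; it flags the tool for the manual expert review described above, and both the finding and the response should be documented.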
Certification and Reporting
Establish a certification process that awards compliance badges to companies meeting established AI standards.
Produce comprehensive audit reports outlining findings, recommendations, and an action plan for remediation of identified issues.
Training and Education
Offer training sessions for companies on responsible AI practices and the importance of compliance with AI regulations.
Provide educational resources to help stakeholders understand the auditing process and its benefits.
Stakeholder Collaboration
Regularly update auditing practices based on feedback from industry experts, government bodies, and academic research.
Participate in forums and panels to stay aligned with national and international AI safety and ethics standards.
Continuous Improvement
Establish channels for continuous feedback from audited organizations to refine and enhance the auditing process.
Stay abreast of advancements in AI technology and regulatory changes to ensure auditing services remain relevant and effective.
For many organizations, another important component will be enhanced auditing criteria that emphasize protecting the civil rights of customers and users.
Of course, it is also important to provide a training program aligned with the organization’s values and the applicable regulations. After all, AI is just math and algorithms. The ultimate designers, users, and, hopefully, beneficiaries are humans. Training the people involved in the entire workflow will continue to be important.
Conclusion and Going Forward
The existing and emerging regulation of AI is a complex and multifaceted issue that requires careful consideration of various aspects, including transparency, record-keeping, litigation risk, and compliance with regulatory frameworks. The NIST Artificial Intelligence Risk Management Framework provides at least one comprehensive approach to managing AI risk, emphasizing the importance of governance, mapping, measuring, and managing AI-related risks. The framework also highlights the need for ongoing analysis and monitoring to ensure that AI systems align with applicable regulations and organizational risk tolerance.
The core auditing principles in any useful AI auditing program will provide a solid foundation for ensuring the safety, security, and beneficial deployment of AI-powered tools. In this context, record-keeping and documentation will play a critical role in establishing regulatory compliance and in maximizing an organization’s ability to defend against the claims that will inevitably arise when AI-powered tools do not function as intended. One thing is clear: the importance of transparency, accountability, and stakeholder engagement will only continue to grow. As we move forward, it will be essential to strike a balance between promoting responsible innovation and protecting individual rights and interests.