New European Union AI Act
The Brave New World of AI Regulation
Just this past week, the European Union adopted the AI Act governing the development and implementation of AI throughout its jurisdiction. Given that the most prominent companies seeking to develop and/or implement AI will have business interests in the EU, the passage of the AI Act is something we all have to contend with.
What Is It Though?
The definition of AI which the Act relies upon is the following: Software that can “for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.”
Instead of a summary or paraphrase, here is the precise language stating the purpose of the AI Act:
“[To] improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values [and to] ensure the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.”
The purpose is explicit enough. It also preempts individual member states’ AI regulations, now and in the future, except where the Regulation itself authorises them.
The Details (Where you-know-who is always found)
The initial paragraph mentions an interest in restricting the use of AI in “publicly accessible spaces” for “remote biometric identification.” The EU apparently does not like governments setting up persistent public surveillance of citizens. The regulations also apply to any company developing and implementing AI, regardless of whether that company is headquartered in an EU member state.
The regulations exclude AI systems deployed by military organizations, deferring to other EU regulations that are expected to govern that domain.
The regulations follow a “risk-based approach,” imposing restrictions whose severity approximates the risk of the AI practice or tool at issue. The most tightly controlled category is “high-risk AI systems,” and some high-risk uses of AI are prohibited entirely: “AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden.” It is unclear from the regulations which circumstances or existing events prompted this prohibition. It is also uncertain whether the drafters of this regulation have ever considered the psychological underpinnings of the various visual features of apps like Instagram and Facebook. News flash: many companies are already using AI-powered systems to influence human behavior.
The use of AI systems to set up a system of “social scoring” (another nod to certain countries’ appetite for persistent surveillance) is also prohibited.
Law Enforcement Use
The regulations prohibit the “real time biometric” use of AI systems except in three specific circumstances:
The search for potential victims of crime, including missing children;
The prevention of certain threats to the life or physical safety of natural persons, or of a terrorist attack; and
“The detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences…if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State.”
Beyond those law-enforcement exceptions, the Act also exempts “AI systems used for the sole purpose of research and innovation, or for people using AI for non-professional reasons.”
Even in these exceptional circumstances, the regulations require approval by a judicial officer before use. (The 4th Amendment is looking pretty good right now.)
All AI systems used in the EU will have to undergo a “fundamental rights impact assessment” before deployment. Some AI products/services will be required to register in an EU database intended to document high-risk AI systems.
If It’s Not That Serious
For systems that are not considered “high-risk,” the regulations are intended to tread lightly, perhaps requiring only a disclaimer noting that the content was produced using AI.
A committee will be formed, inviting participation by all EU member states, to help create and update regulations designed to carry out the purpose of the Act. That committee will also draft the rules that categorize a given AI system as “high-risk” or not.
Violations of the Act will invite the imposition of financial penalties. For most companies, the fine will be calculated as a percentage of the company’s most recent year of revenue. The Act makes exceptions reducing those fines for small entities and start-ups, acknowledging that percentage-based fines calibrated for larger players would hit their finances disproportionately hard.
Thankfully for AI developers, the AI Act does not apply for another two years. Companies have until December 2025 to ensure their AI products and services comply with the Act’s provisions. Undoubtedly, the implementing regulations will be drafted, and will evolve, over that timeframe.
As we have often written here in these pages, anyone claiming to be able to predict the future of AI is not to be trusted. In just over a year, ChatGPT has fueled a revolution of interest in LLMs. Let’s take a moment to realize that LLMs are merely one kind of generative AI. (Here are a few more, some of which we have discussed in various posts for Legal AI: generative adversarial networks, music generation, data augmentation, synthetic data generation, etc.) One fairly confident prediction: by the time the EU AI Act is to be enforced, in late 2025, the makers of more than one new AI product or service none of us has thought of today will be wondering how to comply with this and other laws yet to be enacted.