Decoding the Directive
Navigating AI Regulation in the New Executive Order on Artificial Intelligence
Just in time for this past Halloween, the White House released an Executive Order covering multiple aspects of the design, distribution and use of Artificial Intelligence.
The Breakdown
Safety and Security
Responsible Innovation
Supporting American Workers
Equity and Civil Rights
Consumer Protection
Privacy and Civil Liberties
Retaining AI Trained Members of the Federal Workforce
Safety and Security
This section of the order poses some interesting goals. One of the complaints about AI tools, from many perspectives, is that both before and after their public release the testing of those tools remains opaque. As part of safety, the order requires “[t]esting and evaluations, including post-deployment performance monitoring….” It also calls for developing “mechanisms” to enable the public to decipher when content is generated by AI. This part of the directive is similar to the requirements of the EU AI Act. It seems LLM developers, and those who use their models, should get used to providing pre-production test results and ongoing production testing of those models. A sketch of what that ongoing monitoring might look like follows below.
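To make the “ongoing production testing” idea concrete, here is a minimal sketch of a post-deployment monitoring pass: a small held-out evaluation set is replayed against the deployed model on a schedule and the pass rate is logged for auditors. The query_model() wrapper, the eval cases, and the threshold are all hypothetical placeholders; nothing in the order prescribes this particular mechanism.

```python
# A minimal sketch of post-deployment performance monitoring (not from the order).
# query_model() is a hypothetical wrapper around whatever model is deployed;
# the eval cases and the threshold are illustrative only.

from datetime import datetime, timezone

EVAL_SET = [
    {"prompt": "Spell 'artificial' backwards.", "expected": "laicifitra"},
    {"prompt": "What is 12 * 11?", "expected": "132"},
]

def query_model(prompt: str) -> str:
    """Placeholder for a call to the deployed model being monitored."""
    raise NotImplementedError("wire this to the production model endpoint")

def run_monitoring_pass(threshold: float = 0.9) -> dict:
    """Replay the held-out eval set and record a pass rate for the audit log."""
    passed = sum(
        1 for case in EVAL_SET
        if case["expected"].lower() in query_model(case["prompt"]).lower()
    )
    rate = passed / len(EVAL_SET)
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "pass_rate": rate,
        "meets_threshold": rate >= threshold,
    }
```

A scheduled job could run run_monitoring_pass() daily and retain the results as the kind of “post-deployment performance monitoring” record a regulator might eventually ask to see.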
What Does Generated By AI Mean?
I am writing this post using Substack. If there is some use of AI behind the operation of this site, is what you are reading “generated by AI”? Or does this mean the content is entirely generated by AI? Partially generated by AI? If the term applies to content that is partially AI generated, what percentage is required before the content is brought under this requirement? So much content these days is partially AI generated.
Canva has been rapidly deploying AI tools to augment its visual and graphic production tools. Nearly all content Canva users produce is partially AI generated, and some of it is completely AI generated. Will this portion of the order require Canva to apply a permanent “AI Generated” stamp to all content its users produce? What about users who do not use any of the Canva AI tools but only the “regular” ones? Could some of those regular tools have AI working behind the scenes to make them more effective? Do users have to know that? If users release content without some designation that AI was used in its production, because they did not know it was, who is liable for the failure to properly label the content? The administration of this requirement just for this one company seems daunting.
What about text? ChatGPT produces millions of words of text content a day. Will this portion of the order require that any ChatGPT output copied into a user’s MS Word document carry an “AI generated” stamp or footnote? All of us, when consuming news, audio and video content, want to be sure we are alerted when watching something apparently real that is actually fake. But once an AI tool distributes its output, visual or text, to a user, what is to stop that user from removing any “AI generated” designation? (See the sketch below for how thin a metadata-based label really is.) Will a federal bureau be created to act as a filter for any content that is distributed online to ensure its AI or non-AI provenance? If so, the late 1940s are calling… and it’s George Orwell. I used to joke to lawyers in CLE presentations that any data found online must be true because “it all goes through a government truth filter before appearing online.”
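To illustrate how thin a metadata-based “AI generated” label is, here is a minimal sketch using Pillow to write a provenance note into a PNG’s text metadata and then lose it simply by re-saving the pixels. The file names and the “provenance” key are invented for this example; nothing in the order prescribes this mechanism, and sturdier approaches (cryptographic watermarks, C2PA-style signed manifests) face their own versions of the same stripping problem.

```python
# Illustrative only: a metadata "AI generated" label and how easily it disappears.
# File names and the "provenance" key are made up for this example.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src: str, dst: str) -> None:
    """Write an 'AI generated' note into the PNG's text metadata."""
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text("provenance", "AI generated")
    img.save(dst, pnginfo=meta)

def read_label(path: str) -> str | None:
    """Return the provenance note, if any."""
    return Image.open(path).text.get("provenance")

def strip_label(src: str, dst: str) -> None:
    """Re-saving the pixels without pnginfo discards the label entirely."""
    Image.open(src).convert("RGB").save(dst, format="PNG")
```

One re-encode, screenshot, or copy-paste and the designation is gone, which is why the enforcement questions above are harder than the order’s language suggests.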
Responsible Innovation
This portion of the order seems to be reacting to the several pending class-action lawsuits by artists and authors against the makers of tools like MidJourney and ChatGPT. “[T]ackling novel intellectual property (IP) questions and other problems to protect inventors and creators” is precisely what those courts are already being asked to do.
The remainder of this section proposes the laudable goal of promoting programs to upskill Americans “in the age of AI.” A more ominous tone accompanies the goal of “stopping unlawful collusion and addressing risks from dominant firms’ use of key assets such as semiconductors, computing power, cloud storage, and data to disadvantage competitors.” Uh, what? Is the subtext that this is aimed at AWS, Microsoft and Google? Who else could it be referencing, since those are the three dominant cloud providers worldwide? I am curious to learn more about how these companies, and perhaps others, are colluding to stifle innovation. The recent New York Times lawsuit against OpenAI and others (discussed in a recent post here) makes this requirement even more interesting to ponder.
Supporting American Workers
The key component of this section focuses on the workplace. The administration appears concerned about the use of AI to “undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions.” Some of these goals seem straightforward enough. Probably few of you reading this are looking forward to the day of more AI-powered surveillance at work. But phrases like “cause harmful labor-force disruptions” are amorphous enough to become an eye-of-the-beholder creature of unlimited reach. While predictions are historically unreliable, even those made by so-called experts, the purpose of AI thus far has been to take over tasks that were exclusively done by humans and took much more time (and corporate cost). So my ability to predict that AI will cause “labor force disruptions” is no indication I should open that long-awaited psychic or palm-reading storefront.
The order wants the future of AI to be built upon the “views of workers, labor unions, educators, and employers.” Fair enough. Let’s have everyone involved in deciding how AI regulation should be fashioned to do the most good and the least harm. As I have said in previous columns, it is likely that in the first wave of labor force disruptions, knowledge workers will be affected far more than drywall installers, electricians, or plumbers. Of course, they are workers too, but are they in the pantheon of fields that this order is considering when it uses the term “workers”? Yet to be seen.
Educators have much to worry about here. For more than two years now, various AI tutor platforms have been discussed and some have already been implemented. (Check out Khan Academy.) I am still trying to muster the counterargument to the claim that AI tutors, with memory, constantly learning the unique capacities and struggles of every individual student, their best learning methods, and so on, will be far superior to human instruction as it is designed today. During COVID, the advocacy by those representing teachers that “Zoom school” was a suitable replacement for in-person learning is returning to haunt them. AI is now ready for its moment, as the kids say. Synthesized voices have been perfected. (Listen below: a piece of audio created for free in less than 5 minutes, including the time it took to sign up for the trial account.)
AI Tutors
The synchronization of these synthetic voices with full-motion synthetic video is perhaps 12 months away. There are already free tools using a memory-retaining Socratic learning approach that simulate a potential future for AI tutors. Once those are rolling, with an AI tutor that knows your child’s unique learning style, adapts by subject, remembers their past successes and difficulties, and is constantly updated and tuned to maximize their learning progress… human educators are going to be replaced. This is not an “if,” it is a “when.” Will charter school voucher programs pay for these AI tutors? Will states or the Federal Department of Education seek to outlaw or limit their use in response to the inevitable wave of educator losses? Can parents tune the AI tutor to ensure it delivers their education in the style they desire? Liberal? Conservative? Religious? Atheist? Southern? Northern? More artsy? Less technological? A rough sketch of the kind of per-student memory such a tutor would keep follows below.
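To make the “memory” idea concrete, here is a hypothetical sketch of the per-student profile an AI tutor might maintain. Every field and method name is invented for illustration; this does not describe any actual product’s data model.

```python
# Hypothetical sketch of per-student tutor memory. All names are invented;
# no particular product is being described.

from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    name: str
    preferred_style: str = "socratic"  # e.g., socratic, worked-examples, visual
    mastered_topics: set[str] = field(default_factory=set)
    struggle_counts: dict[str, int] = field(default_factory=dict)  # topic -> missed attempts

    def record_attempt(self, topic: str, correct: bool) -> None:
        """Update memory after each exercise so the tutor adapts over time."""
        if correct:
            self.mastered_topics.add(topic)
            self.struggle_counts.pop(topic, None)
        else:
            self.struggle_counts[topic] = self.struggle_counts.get(topic, 0) + 1

    def next_focus(self) -> str | None:
        """Pick the topic the student has struggled with the most."""
        if not self.struggle_counts:
            return None
        return max(self.struggle_counts, key=self.struggle_counts.get)
```

A tutoring loop would call record_attempt() after each exercise and use next_focus() to choose the next lesson; that is the whole of the “remembers their past successes and difficulties” claim, reduced to a data structure.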
Advancing Equity and Civil Rights
This portion seeks, in part, to “ensure that AI complies with all Federal laws and to promote robust technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation.” As with any industry, the advancement of regulation and the pace of innovation are opposing forces. This section follows with a nod toward imposing liability on those producing AI tools for “unlawful discrimination and abuse, including in the justice system.” That seems straightforward enough. Intentionally designing some AI-powered system to discriminate seems a clean enough paradigm for imposing liability. But, just like any human-designed thing, the best of intentions can go wrong; that whole unintended-consequences thing. Is the potential here that companies that design their tools in good faith to avoid these pitfalls, but whose tools nonetheless result in discrimination in practice, will be punished? Seems so. Preventing unlawful discrimination is always a good thing. Punishing innovation when it inevitably, but unintentionally, goes wrong might not be a philosophy the public wants regulators to follow.
Consumer Protection
Here the EO reminds AI developers, and those companies intent on using these tools, that the enforcement of consumer protection laws will not deviate merely because the thing causing some harm to a consumer has AI in its name. I can understand regulators’ concern about the “black box” argument being used by developers, and by those running models in production, as a hands-up sort of posture when asked who is liable to an injured consumer. The types of injuries the order lists include “fraud, unintended bias, discrimination and infringements on privacy.” The order nods toward healthcare, financial services, education and other important areas where AI tools are being deployed and will continue to be deployed. Not entirely technophobic, the consumer protection section also emphasizes that the current administration “will promote responsible uses of AI that protect consumers, raise the quality of goods and services, lower their prices, or expand selection and availability.” That all sounds good as well.
Retaining AI Trained Members of the Federal Workforce
Like many corporations today, the federal government realizes that to continue to thrive, it needs to train and retain AI-qualified employees at various levels in all departments. The ability to regulate AI necessarily means having highly educated, trained and experienced folks in AI willing to work for the federal government in many of the agencies we can easily list: FDA, HHS, Treasury, DOD and so forth. Attracting folks with that set of skills will be a challenge. My recollection is that the highest-paid federal employee a few years back made a salary only slightly higher than the President’s $400,000 or so a year. Check out any job board today posting positions for AI-qualified folks to lead teams or departments in the private sector: the starting salaries are often $300,000, not including equity stakes and the chance to hitch a ride on a start-up rocket ship. It will be interesting to see how the federal government, and state governments as well, go about structuring compensation to attract the people they will need to understand AI, draft beneficial legislation, and implement it.
Summary (This summary was not written by AI)
One of the easy tasks for AI these days is summarizing long texts. But consider what that means. Somewhere, someone has designed a tool that makes decisions about how to approach summarizing. A human (and most often many humans) is behind the decisions about what data to use to train various models, how to fine-tune them, how to edit the acceptable outputs, and so on. These models, therefore, are not truly black boxes. Yes, it will be arduous and tedious to test them, analyze the results, review them, etc. Regulators have their work cut out for them. But to blithely claim that the black box nature of the neural networks underlying modern-day LLMs means no one really knows how the tools harmed a citizen is just not going to survive contact with regulation.

As we all know, lawyers want to determine who is liable for an injury. More precisely, they want to determine who is liable and actually has the resources to pay the damages suffered by their client. The drive to make that determination means, more than ever, lawyers need to have some basic understanding of what goes into creating and deploying these tools for one basic reason: you need to know whom to sue. The second reason: you need to know who is trying to spin you as to their liability and who is likely the proximate cause of your client’s injury. To know that, you have to know AI… to some extent. But you are all reading this, so you know that. Eventually, the lawyers who maintain an understanding of AI basics will develop an advantage over other lawyers that cannot be closed by simply putting their heads in the sand.
One final note: Chief Justice John Roberts released his 2023 year-end report. The thrust of the report: technology is here, especially AI, and it will dramatically change the law. His urgent insistence, though, was that humans still need to be in the loop - wearing robes, no doubt.