Anyone who has done criminal defense work is well aware of the appellate claim of ineffective assistance of counsel. Does it mean malpractice-level incompetence? Nope. I had many cases when I was a prosecutor in which trial-level defense attorneys encouraged subsequent appellate lawyers to urge reversal on ineffective assistance, with quotes like, “Hey, if that works to get the client’s case reversed, great!” They all seemed to understand that every criminal defense lawyer inevitably chooses one strategy to the exclusion of all others. In so doing, the lawyer leaves convicted clients with buyer’s (or strategist’s) remorse and a fertile argument: “Hey, my lawyer should have _______________.”
We have all seen this argument and its rate of success, which is likely less than 10%. But that does not stop convicted defendants from making such a claim in nearly every appeal. Inevitably, in some cases, they are correct about the ineffective assistance and win. In other cases, they are correct but lose. Finally, in the last category, they are incorrect that they were provided ineffective assistance, and they lose that claim.
Ineffective AI
Into the realm of ineffective assistance comes a recent motion filed in the case of U.S. v. Michel Prakazrel in the federal court in the District of Columbia. The defendant, better known as the rapper Pras Michél of the group the Fugees (not well known to me, but better known in the music world), was convicted of conspiring to defraud the U.S. How, you ask? The accusation involves the defendant taking $88 million to accomplish the task of introducing a Malaysian businessperson to Barack Obama and Donald Trump. It is unclear whether the introductions ever happened, but the conviction did. And Mr. Prakazrel is unhappy with his trial lawyer’s performance. Well, to be precise, he is unhappy with what he claims is his lawyer’s over-reliance on AI to prepare and conduct the trial. Wait, what? Using AI made his lawyer worse? That’s his claim? That is definitely not what we have been led to believe about AI. Wasn’t it supposed to make a bunch of jobs obsolete and turn many of our normal personal interactions into touch-screen beeps with a soulless robot at the Panera? Whelp, it appears that the defendant here is not ready for the AI revolution in lawyering, and he is pretty mad about its use in his case.
In the motion for a new trial made to the trial court, many errors by the court are alleged, but within the first 10 pages or so can be found this quote: “These errors alone warrant a new trial, but the ineffective representation by Michel’s trial counsel…leaves no doubt that a new trial is required.” Id. at 10.
Experimentation
The AI-related allegations also point to a claimed conflict of interest between the defendant’s trial counsel and the AI company whose tool was used, in part, to prepare his defense.
And he used an experimental artificial intelligence (AI) program to draft the closing argument, ignoring the best arguments and conflating the charged schemes, and he then publicly boasted that the AI program “turned hours or days of legal work into seconds.” It is now apparent that Kenner and his co-counsel appear to have had an undisclosed financial stake in the AI program, and they experimented with it during Michel’s trial so they could issue a press release afterward promoting the program—a clear conflict of interest.
The support for this claim is outlined starting on page 31 of the motion. In addition to the conflict of interest claim, new counsel argues, well, that the AI lawyer was ineffective.
“The AI company touted [its use in the defendant’s trial] as the first use of ‘generative AI in a federal trial.’ It showed. [Defense counsel’s] closing argument made frivolous arguments, misapprehended the required elements, conflated the schemes, and ignored critical weaknesses in the Government’s case. The closing was damaging to the defense.”
Consider an Overreaction
So far in this argument, new counsel is claiming that trial counsel relied on an AI tool which did a crappy job creating content for his closing argument. Fair enough. Perhaps the AI tool did provide bad turns of phrase, or constructed the closing in some robotic, clumsy way. But how is that different from my relying on what turns out to be a sloppy co-counsel or junior lawyer at the firm to run down an issue or two for my closing? I am still on the hook for whatever I presented. I can internally blame other staff if they actually made the relevant mistakes, but I cannot say to a court, “Not my fault, it’s this lawyer here that messed up.” The same goes for relying on AI, or for skipping some available legal research tool. Use a supercomputer or run your entire trial on a legal pad and an endless supply of pens; either way, it is still the lawyer’s obligation to provide a suitable defense.
Some judges and lawyers somewhere are inevitably waking up to this story and asking their staff, “Should we just ban using AI?” The mysterious black boxes that are generative AI tools (tools that accept prompts and produce output) may be a source of wonder and fear, but they are just tools. Some will be better than others. Some will hallucinate fake cases. Some will create great metaphors, analogies, and full, emotive stories that will make many lawyers’ closing arguments that much more effective. The tool is not the thing. These are merely choices that lawyers make and have made for years, using all manner of what we now regard as primitive tools, up to and including today’s AI. Let the overreaction countdown begin nonetheless.
The Rapping Evidence
In a bit of unintentional Behind The Music levity, the new lawyer dropped this footnote in the motion.
[Counsel’s] reliance on an experimental AI program may also explain why the closing argument misattributed a Puff Daddy song to Michel’s group, the Fugees. [Counsel] asserted that the Fugees had a song with the lyrics, “Every single day, every time I pray, I will be missing you.” In fact, those lyrics are by Puff Daddy. He also misattributed Michel’s worldwide hit “Ghetto Supastar (That is What You Are)” to the Fugees, when it was actually a single by Michel.
Could not this error also be attributed to…the lawyer being old? The lawyer liking classical music and not having any idea who Puff Daddy is? (Is that even the person’s fake name still, Puff Daddy? Someone please drop a note on that one.) I know who Puff Daddy is. I could spot him in a photo. I only know he once dated Jennifer Lopez. (Coincidentally, the Malaysian man involved in the alleged plot was named Jho Low. Just one letter away from J. Lo.) I would use all my lifelines to answer a trivia question like “name just one song written or sung by Puff Daddy.” I don’t think that means I relied too heavily on AI. I think it means I relied too heavily on less obscure music.
More seriously, new counsel claims that defense counsel “conflated” two different alleged schemes, confusing and degrading the defense.
The Summary
New counsel’s argument on these points is summed up on page 33 of the motion: “At bottom, the AI program failed Kenner, and Kenner failed Michel. The closing argument was deficient, unhelpful, and a missed opportunity that prejudiced the defense.”
Conflict
Perhaps more concerning for the court is the claim that the AI tool that trial counsel relied upon was owned by a company in which he invested. New counsel claims that “The reason [trial counsel] used the experimental program during Michel’s trial and then boasted about it in a press release is now clear: [He] wanted to promote the AI program because [he] appear[s] to have had a financial interest in it.” Id. at 52.
That takes the claims outside of the realm of AI and into more familiar territory: conflict of interest. It is undoubtedly inappropriate for lawyers to rely on a tool, person, or some other technique in preparing a defense for a purpose other than what they believe is in the best interest of their client. The question this argument poses is, to what degree did trial counsel rely on the AI tool? Was the AI tool tested before trial counsel began using it? Are those test results preserved so they can be examined now? Is the tool related to any other existing tools? Readers of this resource know well that even the most costly-to-create tools like ChatGPT hallucinate like a hippie on their third sleepless day at Burning Man. Did the AI tool used in the defendant’s case also hallucinate? These questions and others are critical because, for all we know, the AI tool was superior to what defense counsel would have done otherwise. Who’s to know at this stage? There is a pile of electronic data to probe that, if it exists, is in the possession of those who control the AI tool at issue.
The evidence submitted as part of the motion leaves little doubt that an AI tool was used in preparation of the defense. It quotes a press release by a company named EyeLevel which read, in part: “EyeLevel.AI's litigation assistance technology made history last week, becoming the first use of generative AI in a federal trial. The case involved Pras Michel, a former member of the hip-hop band The Fugees.” The release also quoted trial counsel stating: “This is an absolute game changer for complex litigation….”
The Way Forward
There is no doubt that lawyers and law firms, in practices large and small, will continue weighing the use of AI tools and, now, whether to disclose their use at all. Will malpractice carriers begin to exclude their use or require insureds to attest that they are not using such tools? Perhaps malpractice carriers will insist that lawyers disclose which AI tools they are using so that risk can be properly assessed in underwriting. Are there companies poised to evaluate such tools in advance of their use so that carriers can assess the risk? Many unknowns there.