Lawyers Have an AI Problem
By Christopher Fredericks
It’s Friday afternoon. Work is busier than ever, personal commitments are on the calendar, deadlines are piling up, and email traffic is at full rush hour all the time; in other words, there is never time to get anything done. You decide to try AI to complete a memo, just this once, since you will not meet a deadline without an all-nighter. Besides, the AI-produced work product looks pretty good.
Voilà! It worked! The work was accepted, and no one was the wiser. You swore it was just that once, never again. But a few weeks go by, and you find yourself in a similar situation. What if you use it again, this time for a task that is not as much of a priority? And it works again! It was so easy, you can just rinse and repeat. So you get more comfortable using it for more and more tasks. Until eventually the AI-produced work hallucinates cases, opposing counsel files a motion for sanctions, and a hearing is set with a judge who wants answers.
This is a recurring fact pattern, one with ever wider implications as more people incorporate AI into their practice, and one that the legal community will eventually need to address. Like MLB players in the late 1990s who took anabolic steroids and human growth hormone, lawyers use AI for a number of reasons, but the main one is to get an advantage: AI, when it works, allows someone to do more than the work of one person. Still, lawyers should take heed of the ballplayers who hit ‘juiced’ home runs and then had to deal with congressional investigations, criminal investigations, tarnished reputations, and banishment from the Hall of Fame. Given how much lawyers charge and how the public generally views them, there is similar and considerable schadenfreude heaped on lawyers ‘caught’ using AI. That derision comes not only from the general public but from the legal community as well, especially from those who are hostile to AI, a/k/a the ‘no shortcuts’ crowd.
But the anti-AI lawyers will eventually face the same choice as typewriter repairmen and switchboard operators: adapt or go extinct. AI is here, and it has already worked its way into all facets of a lawyer’s daily life: assisting with research, drafting memos and briefs, prepping for depositions, capturing billable hours and assisting with billing, reviewing discovery documents and transcripts, summarizing documents, creating timelines, processing data, and evaluating aspects of class members, inter alia. Both the marvel and the problem with AI is that, unlike other major technological developments that improved humanity in one way, such as railroads improving transportation, AI has the potential to improve everything, and lawyers are already using it for everything. But with great power comes great responsibility, and many lawyers have proven to be less than responsible users of AI. The pressure to utilize AI, moreover, comes not just from within but from without, as clients increasingly force their attorneys to run the AI gauntlet.
Big Tech is not alone in treating AI as an arms race; companies in all industries are going all-in on AI. Lawyers, as advisors, need to be AI-savvy to competently talk to, comprehend, and advise their clients about the AI issues those clients face. At the same time, every new technology has pitfalls. Given that the majority of lawyers in the U.S. were born before Ronald Reagan became president, many of the individuals tasked with learning AI will simply never be savvy enough to fully and properly utilize the technology. Every week, there are stories of another attorney being sanctioned or filings being rejected because of improper AI use. The typical fact pattern involves someone using AI to generate a brief or assist with research, and the AI program citing cases that simply do not exist. Judges have referred to this as an “epidemic of fake cases,” a “lack of respect for the profession,” and even “a fraud on the court.” Still, the negative headlines do not seem to be dissuading AI use; if anything, they may be drawing attention to it, with users thinking they can use AI consequence-free because “I won’t let that happen to me.” And it is not just the lawyers caught red-handed using AI who suffer; the other members of the firm have to deal with the wrongful-AI-use allegations, develop and maintain firm AI policies, and live with the stigma that comes with the Scarlet Letter of AI.
What comes next in the AI legal landscape is hard to predict. The rise of the pro se AI plaintiff is of significant concern: complaints are already being filed by plaintiffs who have clearly had the assistance of AI, and those complaints may be more difficult to dispose of at an early stage. Surveys signal that Americans’ willingness to sue has increased considerably in the last decade, suggesting that an already litigious country has room to grow. If AI gives just an additional 1% of the population greater access to the legal system with fewer monetary constraints, it has the potential to overload a judicial system already on the brink. This shift is only beginning and will take years to fully play out, but once it does, Congress and the courts will be forced to respond. Legislatures are already responding in disparate ways, and the judiciary is likewise working to adapt to the new AI world. Lawyers will eventually be tasked with navigating these new lands with not much more than a compass in the dark, and there will be claims against lawyers for misusing, and misadvising clients about, AI.
In the short term, malpractice claims against professionals appear limited to scenarios in which a client’s AI venture encounters difficulties and the client blames the lawyer. But it is not only lawyers who view AI with irrational exuberance, and that exuberance will lead to “AI washing” claims: promises will be made to investors that will not be kept, those investors will be looking for someone to pay, and the professional liability policy is often the only potential source of recovery. This is not necessarily different from any new technology with billions of dollars invested in it; there will be losers, and losers look to the only deep pockets available, usually the insurance. Right now, the monetary impact of AI claims is mostly limited to disciplinary actions and sanctions, which so far have meant fines, CLE requirements, and the occasional “promise never to use AI ever again.” But the death-penalty sanctions of default judgment and possible disbarment are surely coming at some point, as judges run out of patience with attorneys taking the use of AI too far. A district court judge recently stated in an order that “Somehow the message still has not been hammered home as the epidemic of citing fake cases continues unabated…It has become clear that basic reprimands and small fines are not sufficient to deter this type of misconduct because if it were, we would not be here.”
But AI is going to lead to claims, and when it does, coverage for AI-related claims may not be clear under every policy, so coverage disputes are likely to arise. For example, if an attorney today programmed a robot to go to court and argue a motion, the robot malfunctioned, the case was dismissed, and the lawyer was sued for malpractice, would coverage be available for that attorney? This hypothetical may not be that far off: much as IBM pitted Deep Blue against chess masters, AI companies are already testing whether AI can argue a Supreme Court case better than a human. And the scenario is not that attenuated from having a generative AI program draft a response to a motion, hallucinate cases, and cause adverse effects for the case, the client, and the lawyer. While many are embracing AI, one profession that already seems exasperated by the technology is the judiciary, and it is not clear that judges, who may embrace technology late, would conclude that AI use falls within “professional services” as defined by most professional liability policies. In fact, insurers are cognizant of this and are already floating AI exclusions to state regulators, explaining that legacy policies, even CGL policies, were not intended to cover AI. Underwriting will also have to address AI, as an insurer must decide whether a lawyer who uses AI ‘to do the work of 10 lawyers’ should pay insurance premiums for just one lawyer.
What law firms can do in the immediate short term is adopt effective AI policies, enforce those policies with regular training, and ensure that lawyers really internalize the “verify” part of “trust but verify” when using AI. Although AI is being pitched everywhere, law firm leaders may not be tech-savvy enough to formulate and enforce these policies. What professional liability carriers can do is ensure that their insureds have AI policies, ensure that those policies include training, preferably annual training, and stay abreast of changes in the AI landscape. Adding more training for lawyers who already face a multitude of annual training and education requirements (CLE, harassment, tech upgrades, etc.) risks overloading a profession with extremely limited bandwidth, but it is becoming clear that such training is necessary. AI is a constantly moving target, and where the technology, legislation, and coverage stood in 2024 could be miles from where they will be by the end of 2026, so it is important for leaders to follow AI developments closely. Technology will continue to improve and, in turn, be more widely adopted in every aspect of everyone’s business. Carriers may also want to explore AI exclusions if AI leads to claims not currently contemplated by their policies. While “professional services” may be defined broadly in these policies, and using AI to facilitate a practice is arguably just an extension of using Outlook, LexisNexis, or PACER, coverage for AI use is far from settled.
AI is coming. Dread it. Run from it. It arrives all the same. While AI offers enormous efficiency gains and is rapidly becoming embedded in every aspect of legal practice, its misuse has already led to sanctions, fake case citations, reputational harm, and judicial frustration. Law firms and carriers must implement clear AI policies, training, and oversight now, because the technology is advancing faster than the legal profession’s ability to manage it. With so many unknowns, the goal in this new AI world is to stay vigilant, ask questions, and not be afraid to adapt your approach to an ever-changing world. 1
1 This summary paragraph was drafted with the assistance of AI.
