Insurers Need to Quickly Adapt to Artificial Intelligence
Introduction
It’s hard to pick up a newspaper or turn on the TV and not see someone discussing Artificial Intelligence. On every earnings call with investors, companies are being peppered with questions on how they are going to implement AI into their business. In turn, clients are asking their professional service providers how AI will improve their services. As companies start to utilize this new technology, there will be bumps along the way; those bumps may eventually lead to claims, and both firms and their insurers need to be prepared for what is coming.
What is AI, and What Will it Do?
AI is all the rage these days, but what exactly is AI and what does it do? AI has been discussed as a theoretical possibility since Alan Turing developed the theory of computation, and the term “electronic brain” was coined in 1943. Universities and the Defense Department started researching AI shortly thereafter, but the technology was not yet ripe for development, and an “AI winter” followed. Other than fictional stories such as 2001: A Space Odyssey, The Terminator, and The Matrix, AI was always just over the horizon and a Hollywood trope for a potentially dystopian future.
Fast forward to November 2022, when Chat Generative Pre-trained Transformer, commonly known as “ChatGPT”, burst onto the scene, unofficially kicking off ‘the age of AI.’ In short, AI permits machines to perform tasks that customarily require human intelligence, beyond routine computational tasks. Like railways, the telegraph, the telephone, radio, and the internet, early reviews are that AI is a legitimate technological development that will probably change the world as we know it.1 If true, it is really just a matter of when these changes will impact our industries, such as the professional service industry. Some reports suggest that AI is going to replace tens of millions of jobs in the US, but as Mark Twain once said, “the reports of my death are greatly exaggerated.” Also exaggerated is the belief that AI is going to replace professionals such as lawyers, accountants, and consultants. Instead, “the battle won’t be between humans and AI but between humans with AI and humans without AI.”2
Currently, firms are utilizing AI in a variety of ways. For instance, at law firms, time-consuming tasks such as research, document drafting and review, billing, and data entry may soon be done in a fraction of the time.3 AI can also assist with painstaking manual discovery preparation and synthesize relevant case law into a cohesive narrative for motion practice and trial preparation. Additionally, finance and accounting are heavily supported by technology, and the data that accountants and auditors must review becomes daunting as companies expand. For accountants, the ability to quickly review, assimilate, and translate that data is a key advantage, and AI may make financial work vastly more efficient. Accounting firms are already using AI to assist in automating complex tax compliance tasks and implementing AI-driven predictive analytics, and AI-powered audit tools may soon be available to assist with a major source of auditor risk: fraud detection.4 In turn, this increased efficiency may result in lower fees for clients and allow professionals to take on more work. Finally, directors and officers at companies all over the globe are looking at ways for AI to improve the bottom line in industries from agriculture to space tourism. People will be pushing the boundaries of what is possible, as humanity always has.
At the same time, the pressure to innovate comes with risks that are exacerbated by a lack of guidance, both historical and legal.
The Current Rules
As it seems that everyone is jumping into the fray with AI, it is important for these de facto beta testers to know what the rules are. But as Doc Brown said in Back to the Future, “where we’re going, we don’t need roads.” As firms race to be at the forefront of this new technology, they are racing through the Wild West: guidance is considerably limited by lack of experience, and what guidance exists is still in its infancy.
On a federal level, a recent blockbuster movie reportedly spurred President Biden to take action, highlighting both the realization that AI innovation is necessary to maintain technological advantage and the hesitancy to proceed full speed ahead toward an AI future without guardrails.5 On October 30, 2023, the Biden Administration issued a landmark executive order entitled “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”6 This order addressed eight principles and priorities: (1) ensuring the safety and security of AI technology; (2) promoting innovation and competition; (3) supporting workers; (4) advancing equity and civil rights; (5) protecting consumers, patients, passengers, and students; (6) protecting privacy; (7) advancing federal government use of AI; and (8) strengthening American leadership abroad.7 Biden stated that the order “represents bold action, but we still need Congress to act.”8 Congress is still deliberating on whether and how to regulate, and is calling on the nation’s biggest technology executives for guidance.9 The order has received mixed reviews.10
On a state level, this year at least 25 states, Puerto Rico, and the District of Columbia introduced artificial intelligence bills, and 15 states adopted resolutions or enacted legislation. Some statutes are exploratory or administrative; some are restrictive, urging caution about what AI means; and some are entrepreneurial, expanding state grants to technology that includes AI. For example:
- North Dakota passed a law that defined “person” and specifically excluded AI from that definition;11
- Connecticut required an agency to inventory all systems used by state agencies that employ AI;12 and
- Maryland established a technology grant program to assist certain small manufacturing enterprises with implementing new “Industry 4.0” technology or related infrastructure, and the grant program encompasses AI.13
On an international level, 31 countries have passed AI legislation, and 13 more are debating AI laws.14 The European Union is currently considering significant legislation, the “AI Act,” a legal framework governing the sale and use of artificial intelligence in the EU. The official purpose of the legislation is to “ensure the proper functioning of the EU single market by setting consistent standards for AI systems across EU member states.” Practically, it is the first comprehensive regulation addressing the risks of artificial intelligence. The Act sets out a series of obligations and requirements intended to foster innovation while safeguarding the health, safety, and fundamental rights of EU citizens and beyond. As the first major piece of AI legislation, it is expected to have an outsized impact on AI governance worldwide, as other bodies will look to the Act for guidance.
Regarding the rules for lawyers, in New York, the State Bar Association has started dipping its toes into these uncharted waters via the publication of several articles urging lawyers to err on the side of caution and adhere to their ethical obligations.15 For accountants, auditing firms are fully embracing AI in their audits, and the Financial Accounting Standards Board (FASB) and Governmental Accounting Standards Board (GASB) are both looking at how investors process data using AI; while standards have not been updated to account for AI in the accounting and auditing process, it is clearly on their radar.16
Regarding directors and officers, no single regulator or governing body necessarily informs the applicable corporate oversight; as AI infiltrates a wide array of industries, responsibility for complying with a variety of AI-related rules and regulations will inevitably flow upward to the boardroom. Several regulators have already taken steps, even if just in an advisory capacity, to indicate that AI is indeed front of mind, and either additional regulatory initiatives are on the horizon or the existing rulebook remains in play. The CFPB, DOJ, EEOC, and FTC issued a joint statement in April 2023 stating that the agencies’ “[e]xisting legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.”17 In June 2020, FINRA issued a report entitled “Artificial Intelligence (AI) in the Securities Industry,” which suggested, among other things, that firms review their model risk management frameworks, address data governance policies (including protection of financial and personal customer information), and establish reasonable supervisory policies.18 Additionally, during a July 2023 speech, the Chair of the SEC warned of AI’s impact on the securities industry, identifying five major risks – bias, conflicts of interest, financial fraud, privacy and intellectual property concerns, and the stability of the markets – that call for consideration, and possible regulation, from the SEC.19
With present guidance so limited and what exists so new, uncertainties in practice are apparent.
With Uncertainty Come Pitfalls and the Potential for Stumbles
The risks with AI are plain: few experts doubt the possibility that AI could wipe out humanity at some point in the future. The only other invention mankind ever created that could do the same is the hydrogen bomb, and how nuclear warfare was researched and tested is instructive on how AI may proceed: actors will push the limits of what is possible to stay ahead of their competitors. That is true at both the state and private sector levels.
Therefore, AI is now effectively an arms race, not only on an international level between superpowers, but also on a local level, such as at professional service firms; whoever harnesses its power first will have a competitive advantage, potentially an insurmountable one, as machine learning technology can improve exponentially. Firms are therefore jumping into the AI fray to stay ahead of the curve. To add fuel to the fire, many clients are inquiring into and pushing for firms’ AI use, even at firms that might not be ready to embrace the new technology. Pressure to stay ahead is percolating and will eventually hit full boil.
First, AI will be tested, as it is now, on low-responsibility functions, such as administrative tasks. Still, there will be errors along the way. For example, health insurer Cigna has been named in two lawsuits alleging that it used an AI algorithm to screen out claim submissions that didn’t match certain pre-set criteria, allegedly resulting in the denial of hundreds of thousands of claims without a physician’s review.20 Cigna states that it uses the technology to verify that the codes on some of the most common, low-cost procedures are submitted correctly to help expedite physician reimbursement. However, Cigna’s use of technology to assist in expedition and verification – or even to just streamline paperwork – has now prompted allegations that implicate consumer protection laws at the state and federal level, as well as the nature and extent of required medical and professional oversight.
Next, people will be pressured to push the limit, and mistakes will be made. Mistakes have already happened, specifically with respect to lawyers using AI. On October 16, 2023, following what the AI company touted as the “first use of generative AI in a federal trial,” convicted Fugees member Prakazrel Michel filed a motion for a new trial, arguing, among other things, that his defense attorney “used an experimental AI program to write his closing argument, which made frivolous arguments, conflated the schemes, and failed to highlight key weaknesses in the Government’s case.”21
Additionally, on June 22, 2023, a U.S. judge “imposed sanctions on two New York lawyers who submitted a legal brief that included six fictitious case citations generated by an artificial intelligence chatbot, ChatGPT.”22 The lawyers asserted that they “made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”23 Still, their error was well publicized and stands as a stark warning for attorneys who may be tempted to delegate substantive legal work to AI.
In the data privacy space, the implementation of AI has already provoked lawsuits alleging misuse of personal data. In July 2023, a class action lawsuit asserted that Alphabet’s Google misused personal information in training its generative AI tools and that Google’s “web scraping” violated individuals’ privacy rights, inter alia.24 A similar lawsuit was filed in June 2023 against OpenAI, likewise alleging the misuse of personal data to “train” certain AI models.25 One question raised by these cases is what constitutes “private” data if it is publicly accessible to AI technology. Whether these cases will stand up to scrutiny remains to be seen, but they at least indicate a readiness by the plaintiffs’ bar to test the issues, and another avenue of consideration as AI processes vast amounts of data.
Additionally, as with any new technology, there will be the potential for fraud, given the combination of novelty, incomplete understanding, and the lure of profit. The recent technological advancement of cryptocurrency was rife with fraud, as evidenced by the downfall of FTX.26 The professionals and companies that engage with any new technology may be left holding the bag after a major fraud is eventually uncovered. Further, AI may be used by the fraudsters themselves to propagate and conceal their fraud, and in that way, AI might be both a sword and a shield.
These are just the first instances involving the potential errors and omissions that will occur in the near future, as AI will be tested in more and more areas, leading to more and more claims.
What Professional Service Firms Need to Do
So, where to begin? AI is undoubtedly a powerful tool that offers immense benefits when harnessed responsibly. But it can be a double-edged sword, posing significant risks when handled recklessly.27 As always, an ounce of prevention is worth a pound of cure. Before jumping on the AI bandwagon, professional service firms, at minimum, should: (1) sufficiently educate themselves on AI (e.g., training); (2) implement an internal AI policy (e.g., a commitment not to use AI tools without a responsible human supervisor overseeing the final output); and (3) stay informed (i.e., keep abreast of new developments and regulations, and share insights and experiences with others in the industry).28
In addition to this general guidance, firms should consider bespoke guidance tailored to their industry. For example, with respect to lawyers, a federal judge in Texas now requires attorneys to certify either that no portion of any filing will be drafted by AI or that any language drafted by AI will be checked for accuracy by a human being.29 Judge Brantley Starr states that while AI platforms “are incredibly powerful and have been used in the law,” “legal briefing is not one of them.”30 This is because these platforms are prone to hallucinations and bias.31 Regarding bias, Starr states:
- “While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States.”32
Another federal judge, Stephen Vaden, issued an order33 requiring lawyers to file both a notice disclosing which AI program was used and “the specific portions of the text that have been so drafted,” and a certification that the use of AI “has not resulted in the disclosure of any confidential or business proprietary information to any unauthorized party.”34 Consequently, lawyers should be cognizant of local rules and judicial preferences regarding AI use.
With respect to accountants, as they continue to expand the use of AI-powered accounting software, they will need to establish safeguards for its current use.35 For example, firms are already using AI to automate tedious tasks (e.g., bookkeeping, reconciling payments, and chasing unpaid invoices), visualize data, run predictive analytics, break down financial jargon, and detect fraud. However, firms must be cognizant that they should not simply input all of their financial data, rely solely on AI, or harbor unrealistic expectations about the upside of the technology, as there are clear downsides to placing reliance on such nascent technology.
With respect to directors and officers, absent new guidance from relevant regulators, boards and C-suite executives should approach AI risks with the same standards and duties expected of them since, and further refined after, the Caremark decision.36 In other words, corporations should remain cognizant of their duty of oversight. This includes complete knowledge of where AI is used within the company, how it impacts decision-making, and where any risks may exist. Implementing a robust AI policy is advisable – one that details clear reporting lines on AI usage within the company and includes input from several internal sources, including IT, legal, human resources, and others.37 Also, as noted above, government agencies have at least foreshadowed forthcoming regulations surrounding AI, and boards need to remain informed of this developing regulatory environment. This could impact their duty to disclose how AI is used in the decision-making process and where it may impact financial performance.
Additionally, even if a professional firm has not yet adopted AI tools, there is a good chance that it employs a third-party vendor that has, and therefore knowledge and mitigation of these risks are still relevant. For instance, cyber security vendors, digital forensic experts, and e-discovery firms have employed AI technology for several years in the context of network security and large data aggregation. The recent emergence of AI in the public eye will further hone the use of this technology by those firms. Just as professional firms have an obligation to responsibly oversee their own use of technology – especially when it involves client or sensitive data – outside use of AI is no different, and professional firms should remain cognizant of this chain of liability. Minimizing the exposure from third-party use of AI requires, at a minimum, knowing which third parties are using these tools. Additional protections could include contractual indemnification provisions and confirming that these vendors have the proper risk assessment policies in place should a potential error occur on their end.
This is just the tip of the iceberg regarding available resources for firms interested in AI.
What Insurers Need to Do
In light of the above, we suggest that insurers undertake the following five steps.
First, before claims arise, insurers, insureds, and brokers need to have an open dialogue regarding an insured’s use of AI to service its clients.38 Said dialogue should address the insured’s AI policy, protocols, and controls.39 More specifically, insurers should seek evidence of a risk-aware culture and diligent corporate partners, and insureds must “demonstrate that they have assessed and averted potential risks stemming from the use of AI.”40 Initially this may be an invasive process, with pushback on the level of inquiry, so requiring, or at least encouraging, insureds to have a consistent risk management policy will be imperative.
Second, insurers should remain aware of the fluctuations the market may make from year to year and be prepared to respond accordingly. As with any emerging risk, there are inevitably going to be some growing pains over the initial years until trends in coverage and exposure become clearer. While cyber insurance may be where one impulsively looks when discussing AI claims, the potential claims landscape implicates several lines (see infra). However, cyber insurance may provide a useful corollary for approaching AI in the underwriting context insofar as it is one of the more recent examples of a market weathering its own growing pains in response to an emerging risk. As a (simplified) overview of the recent evolution of the cyber insurance market:
- 2000s: An initial wariness of new exposures related to cyber risk with many carriers approaching forms, pricing, and limits with caution;
- 2010s: A significant expansion of the cyber market with increased coverage and limits. Premiums remained low and the underwriting process was lenient with little historical claims data to use or learn from; and
- 2020-present: Large losses emerged with frequency and severity. Premiums increased and underwriters enhanced their scrutiny.
While this is not a harbinger for how any AI market will take shape, there are similarities, as AI is also a new risk with little, if any, underwriting reference data and an uncertain path forward. Insurers can hopefully avoid some speed bumps by learning lessons from the recent cyber insurance example: approach the underwriting process with scrutiny, gather as much historical data about an AI model’s use within the insured as possible, and remain flexible. Just as cyber exposures change quickly, AI models advance with speed, so the risk analysis approach should remain dynamic as well.
Third, while itself risky, insurers should consider implementing AI during the application process to help identify claims trends, translate claims data, and surface potentially unforeseen risks. AI tools have already proven useful in the cyber underwriting process, as they help identify network security strength and potential exposure to threats within an entity’s entire system. AI technology to enhance underwriting across several other lines of business will undoubtedly emerge and can be a useful supplement to the traditional process. The technology itself is focused on data-driven aggregation and prediction, so AI could naturally lend itself to increasing the efficiency and accuracy of risk assessments and pricing strategies. This will involve some apprehension as the applications of these tools are refined and adopted over the near future, but investing time to explore current AI research and assess current systems will at least keep insurers at the forefront of the conversation surrounding the advancing technology.
Fourth, insurers need to know which policies might be implicated by AI-related claims. Although it might feel natural for a cyber liability policy to respond, coverage will vary from claim to claim and could very likely involve a variety of professional liability policies. The nature of AI’s main commodity (i.e., vast amounts of electronic data) could certainly trigger a cyber policy to the extent that the AI system itself is compromised, resulting in a digital asset loss, business interruption, and the need for a breach response team. However, when the error occurs in the human oversight of that AI technology, E&O policies will not be immune. Allegations of negligent reliance on AI-generated legal advice would involve an LPL policy. Alleged discrimination and bias in an AI-assisted hiring process would involve an EPL policy. Failure to disclose AI-related usage or failure to implement adequate corporate safeguards would involve a D&O policy. In many of these examples, there would be neither a “breach” to implicate a cyber policy nor an “error” in the AI technology itself, which may have performed exactly as it was programmed and instructed to by the professional.
Finally, as there will eventually be claims, keeping track of claims data as well as staying on top of how courts treat AI-related claims will be important, as courts will also likely be blazing new trails with AI-related trials.
Conclusion
AI offers the prospect of a species-altering technology, like the wheel, the printing press, vaccines, and the steam engine. Looking back at all of these inventions, their unforeseen consequences were difficult to see at the time. The same is true of AI: the bright future ahead will encounter similar bumps in the road. There will be growing pains for both insurers and their insureds, and each will learn from their mistakes. In order to minimize such growing pains, insurers should understand that the risks will initially increase with AI, perhaps significantly, and prepare accordingly by having as much information about their insureds as possible.
Footnotes
1 Inherent in all of this is the assumption that AI will continue to advance at a pace akin to Moore’s Law, Gordon Moore’s 1965 observation that the number of components on a microchip would double roughly every two years while the cost would similarly drop. For over 50 years, that law has held true, as computers have become much more powerful, and AI development may follow a similar path.
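To put rough arithmetic on that premise: doubling every two years over 50 years is 25 doublings, a factor of 2^25, or roughly a 33-million-fold increase. If AI capability were to compound at even a fraction of that rate, the changes discussed in this article would arrive quickly.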
2 https://nysba.org/artificial-intelligence-will-transform-legal-profession/
3 https://legal.thomsonreuters.com/blog/how-law-firms-can-use-ai-to-level-up-theirbusiness/#:~:text=Virtually%20all%20law%20firms%20utilize,text%2C%20graphics%2C%20and%20documents
4 Conversely, AI may be used by the fraudsters themselves to conceal the fraud from gatekeepers. https://tax.thomsonreuters.com/blog/how-do-different-accounting-firms-use-ai/#:~:text=Accounting%20firms%20of%20all%20sizes,on%20higher%2Dvalue%20advisory%20role
5 Released in July of this year, the most recent ‘Mission: Impossible’ villain was not an evil megalomaniac, but instead, rogue AI which every country on earth was fighting to obtain. https://fortune.com/2023/11/01/biden-ai-executive-order-tom-cruise-mission-impossible-movie/
6 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
7 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
8 https://www.cnbc.com/2023/10/30/biden-unveils-us-governments-first-ever-ai-executive-order.html
9 https://apnews.com/article/schumer-artificial-intelligence-elon-musk-senate-efcfb1067d68ad2f595db7e92167943c
10 https://newsus.cgtn.com/news/2023-11-10/Biden-s-executive-order-on-AI-gets-mixed-reviews-1oBJsnY8rTy/index.html
11 https://ndcan.org/house-bill-1361
12 https://www.cga.ct.gov/asp/CGABillStatus/cgabillstatus.asp?selBillType=Bill&bill_num=SB1103
13 https://commerce.maryland.gov/Documents/Maryland.Manufacturing.M4.Application.pdf
14 https://www.csis.org/blogs/strategic-technologies-blog/ai-regulation-coming-what-likely-outcome#:~:text=31%20countries%20have%20passed%20AI,are%20subject%20to%20different%20regulations
15 https://nysba.org/using-ai-in-your-practice-proceed-with-caution/
16 https://tax.thomsonreuters.com/news/u-s-accounting-rulemakers-studying-how-investors-use-artificial-intelligence-to-consume-financial-data/
17 https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf
18 https://www.finra.org/rules-guidance/key-topics/fintech/report/artificial-intelligence-in-the-securities-industry
19 https://www.sec.gov/news/speech/gensler-isaac-newton-ai-remarks-07-17-2023
20 Kisting-Leung et al. v. Cigna Corp. et al., E.D. Cal. (July 24, 2023); Van Pelt et al. v. The Cigna Group et al., D. Conn. (Aug. 25, 2023).
21 https://fingfx.thomsonreuters.com/gfx/legaldocs/klvyzjemypg/frankel-usvmichel–newtrialbrief.pdf
22 https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/
23 Id.
24 J.L. v. Alphabet Inc., U.S. District Court for the Northern District of California, No. 3:23-cv-03440.
25 PM v. OpenAI LP, N.D. Cal., No. 3:23-cv-03199.
26 The Department of Financial Protection & Innovation has a section of its website dedicated to “crypto scams”: https://dfpi.ca.gov/crypto-scams/
27 https://www.wtwco.com/en-us/insights/2023/10/navigating-ai-risks-in-professional-liability
28 Id.
29 https://www.txnd.uscourts.gov/judge/judge-brantley-starr
30 I.e. they make up stuff. Id.
31 Id.
32 Id.
33 https://www.cit.uscourts.gov/sites/cit/files/Order%20on%20Artificial%20Intelligence.pdf
34 https://www.reuters.com/legal/transactional/another-us-judge-says-lawyers-must-disclose-ai-use-2023-06-08/
35 https://medium.com/@Raedan_LDN/how-to-use-ai-in-accounting-the-dos-and-donts-5a5c2725b72d
36 In re Caremark Int’l Inc. Derivative Litig., 698 A.2d 959 (Del. Ch. 1996).
37 See id. at 971 (citing an “utter failure to attempt to assure a reasonable information and reporting system exists” as grounds for establishing the lack of good faith that is a necessary condition to liability).
38 https://cms.law/en/gbr/publication/artificial-intelligence-consequences-for-professional-indemnity-insurers-when-ai-fails-to-perform
39 https://www.wtwco.com/en-us/insights/2023/10/navigating-ai-risks-in-professional-liability
40 https://www.wtwco.com/en-us/insights/2023/10/navigating-ai-risks-in-professional-liability