Have you used, do you intend to use, or will you be asked to use some form of AI (artificial intelligence) in your practice? If so, you may need a written policy that outlines when, where, and how it is appropriate to engage with such technology. There are well-documented cases where lawyers have employed AI to draft briefs or articles, or to perform research, in full or in part. Some of those cases have met with, let’s say, questionable outcomes: in one personal injury lawsuit, lawyers were sanctioned after their briefs included citations to non-existent opinions and fabricated quotes. Despite concerns about over-reliance on such tools, the train may have left the station: LexisNexis and Westlaw have rolled out their “AI Powered” legal research platforms (Lexis+AI™ and Westlaw Edge), and clients may expect their lawyers to find efficiencies from those tools.
Ethical Issues
In a presentation at the TIPS Cybersecurity and Data Privacy Conference, panelists Alyssa Johnson (Barron & Newberger), John Stephens, and John Hendricks (both from Hendricks Law) analyzed how generative AI tools implicate key ethical duties: the duties of confidentiality, competency, and diligence. (“Generative AI,” as used here, means a program that generates text, images, and other data from models built on learned data, patterns, or structures, e.g., ChatGPT, which generates its own text in response to user prompts and the content provided.) The panel highlighted real-world examples where courts now insist that lawyers disclose their use of AI (the scope and specifics of such disclosures remain to be determined in many jurisdictions). The panel also stressed other essential considerations: disclosing the use of AI to clients, along with any related costs or potential fee adjustments; eliminating bias; validating and correcting results; complying with the rules of relevant jurisdictions; and overseeing and understanding who is using AI and how it is being used. On that last point, it is becoming clear that firms will likely need to supplement or create guidelines addressing how their lawyers are to use and benefit from generative AI.
Need for New Guidance
Thomson Reuters, the parent entity of Westlaw, reports that while regulation is in its early stages, the focus has been on the privacy rights of individuals, particularly consumer protection issues and the right to opt out. Thomson Reuters Institute, Legalweek 2024: Current US AI regulation means adopting a strategic—and communicative—approach. Some in-house corporate legal departments have banned the use of ChatGPT outright as the industry awaits clearer definitions of appropriate controls.
Firms have well-established policies and procedures covering conflicts checks, internet and email use, social media content, remote access, and related HR and code of conduct matters. These policies are informed by client obligations as well as the ethical and statutory oversight of the practice of law. Just as courts have set down electronic discovery, filing, and communications policies, jurisdictions will follow suit in monitoring and policing attorneys’ use, or potential abuse, of generative AI. Apart from privacy and confidentiality, a lack of proper oversight can also lead to errors and omissions. Lawyers should also weigh the risks and benefits of sharing what traditionally would have been their proprietary work product with a technology that is open to the internet.
From briefs, memoranda, and standard motions to client updates, opinions, and newsletters, firms may have years’ worth of data and content that makes them stand out to their clients or an industry. It is foreseeable that pressure to produce advice or advocacy in the most efficient and effective way possible could lead to incorporating unreliable concepts or sources; meanwhile, sharing your content outside of your presumably secure environment carries its own risks. Remember, the technology is based on the user “prompting” the program with text; the software then responds by incorporating what the user said and drawing on terabytes of data to find the next most likely series of words. Once prompted, depending on the technology, the original content has been shared outside the firm’s confidential and secure environment, which may be especially problematic if the lawyer also shared client-generated content (even if anonymized, some fact patterns lend themselves to easy identification, as some have learned in the advertising context). Training and overseeing younger lawyers on these finer points presents an additional layer of risk management.
Updated Guidelines
What updates or new guidelines should firms turn to in reconciling the dawn of this new era with their traditional way of operating? Just as firms and bar groups train new lawyers on confidentiality and fiduciary duties, the time has come to reframe these issues with AI in mind. Unsurprisingly, the State Bar of California weighed in with “guidelines for generative AI use.” Updated firm policies could include some “easy” fixes from those guidelines:
- Confidentiality:
- Lawyers must not input any confidential client information into any generative AI solution that lacks adequate confidentiality and security protections.
- If a client consents to or requests the use of generative AI, review the product’s terms to ensure content will not be used or shared by the AI product in any manner for any purpose.
- Competency and Diligence:
- Before using generative AI, lawyers should understand to a reasonable degree how the technology works and its limitations.
- Lawyers must scrutinize and critically analyze output for accuracy and bias and make any corrections where necessary.
- Compliance with law: Lawyers must ensure compliance with relevant laws and regulations applicable to attorneys, clients, the content, or the output.
- Supervisory: Even if directed by a client or supervisor, subordinate lawyers may not use generative AI in a manner that violates professional obligations.
- Client communications: Disclose to clients the novelty of the technology, the risks associated with its use, and the scope of representation, and address the client’s sophistication (where a client has specific knowledge of the type of AI, address where AI may complicate advocacy or present limitations, especially where the client expresses a preference for using AI).
- Candor to the tribunal: Review and correct auto-generated citations or edits. Comply with local rules.
- Fees: Bill for time spent, not time the lawyer would have spent absent AI. Engagement letters should address any impact on fees or costs.
- Discrimination: Be aware of bias risks; correct or eliminate anything contradictory to firm or court practices.
As noted, for a firm’s own proprietary interests, additional guidance would include:
- Proprietary Content: Do not share any materials or content, whether client- or firm-generated, outside of the firm’s environment if third parties could use or modify the content or identify its source.
Some firms may feel more comfortable imposing an AI ban first and modifying their use policies once the regulatory landscape has developed more fully. At the very least, lawyers and firms need to be aware that there will likely be a push to adopt such advances as clients, courts, and parties try to capture the benefits. The onus, as ever, will be on counsel to assess the risks and avoid the pitfalls.
Original article published by the American Bar Association Tort Trial and Insurance Practice Section Law Journal Spring 2024. Permission to republish granted by the American Bar Association.
Author: Peggy Reetz
Peggy is a Partner at Mendes & Mount, LLP where she specializes in cybersecurity and data privacy issues.