Legal Ethics in the AI Era: The NYC Bar Weighs In
September 10, 2024
Here is my recent Daily Record column. My past Daily Record articles can be accessed here.
Since ChatGPT was released in November 2022, many jurisdictions have issued AI guidance. In this column, I’ve covered the advice rendered by many ethics committees, including those of California, Florida, New Jersey, Michigan, New York, Pennsylvania, and Kentucky, along with the American Bar Association and, most recently, Virginia.
Now, the New York City Bar Association has entered the ring, issuing Formal Ethics Opinion 2024-5 on August 7th. The New York City Bar Association Committee on Professional Ethics mirrored the California Bar’s approach, providing general guidelines in a chart format rather than prescriptive requirements. The Committee explained that “when addressing developing areas, lawyers need guardrails and not hard-and-fast restrictions or new rules that could stymie developments.” Instead, the goal was to assist New York attorneys with “advice specifically based on New York Rules and practice…”
Regarding confidentiality, the Committee distinguished between “closed systems” consisting of a firm’s “own protected databases,” like those typically provided by legal technology companies, and systems like ChatGPT that share inputted information with third parties or use it for their own purposes. Client consent is required for the latter, and even with “closed systems,” confidentiality protections within the firm must be maintained. The Committee cautioned that the terms of use for a generative AI tool should be reviewed regularly to ensure that the technology vendor is not using inputted information to train or improve its product in the absence of informed client consent.
Turning to the duty of technology competence, the Committee opined that when choosing a product, lawyers “should understand to a reasonable degree how the technology works, its limitations, and the applicable [T]erms of [U]se and other policies governing the use and exploitation of client data by the product.” The Committee also emphasized the need to avoid delegating professional judgment to these tools and to treat generative AI outputs as a starting point. Not only must lawyers ensure that the output is accurate, but they should also take steps to “ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand.”
The duty of supervision was likewise addressed, with the Committee confirming that firms should have policies and training in place for lawyers and other employees in the firm regarding the permissible use of this technology, including ethical and practical uses, along with potential pitfalls. The Committee also advised that any client intake chatbots used by lawyers on their websites or elsewhere on behalf of the firm should be adequately supervised to avoid “the risk that a prospective client relationship or a lawyer-client relationship could be created.”
Not surprisingly, the Committee required lawyers to be aware of and comply with any court orders regarding AI use. Another court-related issue addressed was AI-created deepfakes and their impact on the judicial process. According to the Committee’s guidance, lawyers must screen all client-submitted evidence to assess whether it was generated by AI, and if there is a suspicion “that a client may have provided the lawyer with Generative AI-generated evidence, a lawyer may have a duty to inquire.”
Finally, the Committee turned to billing issues, agreeing with other jurisdictions that lawyers may charge for time spent crafting inquiries and reviewing output. The Committee also explained that firms may not bill clients for time saved as a result of AI usage, and that firms may want to explore alternative fee arrangements to stay competitive, since AI may significantly impact legal pricing moving forward. Last but not least, any generative AI costs should be disclosed to clients, and any costs charged to clients “should be consistent with ethical guidance on disbursements and should comply with applicable law.”
The summary above provides only an overview of the guidance. For a more nuanced perspective, you should read the opinion in its entirety. Whether you’re a New York lawyer or practice elsewhere, this guidance is worth reviewing and provides a helpful roadmap for adoption as we head into an AI-led future where technology competence is no longer optional. Instead, it is an essential requirement for the effective and responsible practice of law.
Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].