
The GenAI Courtroom Conundrum

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

The GenAI Courtroom Conundrum

In the wake of GPT-4's release a year ago, there has been a notable uptick in the use of generative artificial intelligence (AI) tools by lawyers when drafting court filings. However, along with this embrace of cutting-edge technology has come an increase in well-publicized incidents involving fabricated case citations.

Here is but a sampling of incidents from earlier this year:

  • A Massachusetts lawyer faced sanctions for submitting multiple memoranda riddled with false case citations (2/12/24).
  • A British Columbia attorney was reprimanded and ordered to pay opposing counsel's costs after relying on AI-generated "hallucinations" (2/20/24).
  • A Florida lawyer was suspended by the U.S. District Court for the Middle District of Florida for filing submissions based on fictitious precedents (3/8/24).
  • A pro se litigant's case was dismissed after the court called them out for submitting false citations for the second time (3/21/24).
  • The 9th Circuit summarily dismissed a case without addressing the merits due to the attorney's reliance on fabricated cases (3/22/24).

Judges have been understandably unhappy with this trend, and courts across the nation have issued a patchwork of orders, guidelines, and rules regulating the use of generative AI in their courtrooms. According to data collected by RAILS (Responsible AI in Legal Services) in March, there were 58 different directives on record at that time.

This haphazard manner of addressing AI usage in courts is problematic. First, it fails to provide much-needed consistency and clarity. Second, it evinces a lack of understanding about the extent to which AI has been embedded within many technology products used by legal professionals for years now — in ways that are not always transparent to the end user.

Fortunately, as this technology has become more commonplace and better understood, some of our judicial counterparts have begun to revise their perspective, offering newfound approaches to AI-supported litigation filings. They have wisely concluded that rather than over-regulating the technology, our profession is better served by reinforcing existing rules that require due diligence and careful review of all court submissions, regardless of the tools employed.

For example, earlier this month, the Fifth Circuit U.S. Court of Appeals in New Orleans reversed course and chose not to adopt a proposed rule that would have required lawyers to certify that if an AI tool was used to assist in drafting a filing, all citations and legal analysis had been reviewed for accuracy. Lawyers who violated this rule could have faced sanctions and the risk that their filings would be stricken from the record. In lieu of adopting the rule, the Court advised lawyers to ensure “that their filings with the court, including briefs…(are) carefully checked for truthfulness and accuracy as the rules already require.”

In another case, a judge for the Eleventh Circuit U.S. Court of Appeals used ChatGPT and other generative AI tools to assist in writing his concurrence in Snell v. United Specialty Insurance Company, No. 22-12581. In his concurrence, he explained that he used the tools to aid in his understanding of what the term “landscaping” meant within the context of the case. The court was tasked with determining whether the installation of an in-ground trampoline constituted “landscaping” as defined by the insurance policy applicable to a negligence claim. Ultimately, he found that the AI chatbots did, indeed, provide the necessary insight, and referenced this fact in the opinion.

In other words, the times they are a-changin’. The rise of generative AI in legal practice has brought with it significant challenges, but reassuringly, the legal community is adapting. The focus is beginning to shift from restrictive regulations toward reinforcing existing ethical standards, including technology competence and due diligence. Recent developments suggest a balanced approach is emerging, one that acknowledges AI's potential while emphasizing lawyers' responsibility for accuracy. This path forward strikes the right balance between technological progress and professional integrity, and my hope is that more of our esteemed jurists choose it.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].