Florida Bar Hands Down Opinion on AI Ethics
February 12, 2024
Here is my recent Daily Record column. My past Daily Record articles can be accessed here.
*****
Florida Bar Hands Down Opinion on AI Ethics
We’re heading into the second year of generative artificial intelligence (GAI) availability, and if you haven’t been paying attention to this cutting-edge tool, there’s no better time than the present. GAI is advancing at an exponential rate and is rapidly being incorporated into the software tools you use every day. From legal research and contract drafting to law practice management and document editing, GAI is everywhere, and avoiding it is no longer an option.
This is especially so now that legal ethics committees across the country are rising to the challenge and issuing GAI guidance. The first to do so was the State Bar of California’s Committee on Professional Responsibility and Conduct, which handed down a thorough roadmap for ethical generative AI adoption in law firms in November 2023. As I discussed in another article, the guidance provided was extensive and covered many different ethical issues including technology competence, confidentiality, and the requirement of candor, both with legal clients and courts.
More recently, both Florida and New Jersey released guidance, with Florida issuing Ethics Opinion 24-1 on January 19th, and New Jersey handing down preliminary guidelines on January 24th. Below I’ll break down the Florida opinion, and in my next article will address the New Jersey guidance.
In Florida Bar Ethics Opinion 24-1 (online: https://www.floridabar.org/etopinions/opinion-24-1/), the Board Review Committee on Professional Ethics concluded that lawyers may use GAI but, of course, must do so ethically. The opinion addresses a broad range of ethics issues, from confidentiality and pricing to supervision and lawyer advertising.
At the outset, the Committee explained that GAI tools can “create original images, analyze documents, and draft briefs based on written prompts,” but in doing so can hallucinate, meaning they provide “inaccurate answers that sound convincing.” As a result, the Committee cautioned that all output must be carefully reviewed for accuracy.
Next, the Committee reviewed GAI in the context of confidentiality and advised that lawyers who use GAI must have a thorough understanding of how a GAI tool handles the data they input into it and whether that data is used to train the system.
According to the Committee, a key way to do so is to vet GAI providers in much the same way as cloud computing providers by taking steps to: 1) ensure that the provider has an enforceable obligation to preserve the confidentiality and security of data; 2) ensure that the provider will notify the lawyer in the event of a breach or of service of process requiring the production of client information; 3) investigate the provider’s reputation, security measures, and policies, including any limitations on the provider’s liability; and 4) determine whether the provider retains information submitted before and after the discontinuation of services or otherwise asserts proprietary rights to the information.
When it comes to client consent, the Committee noted that one way to mitigate confidentiality concerns is to use in-house GAI tools like the legal-specific tools I referenced above, “where the use of a generative AI program does not involve the disclosure of confidential information to a third-party,” in which case “a lawyer is not required to obtain a client’s informed consent pursuant to Rule 4-1.6.”
Next, the Committee reiterated that lawyers must carefully review the output of any GAI tool and ensure that all firm users are instructed to do so as well. The Committee cautioned that lawyers should not delegate work to a GAI tool that “constitute(s) the practice of law such as the negotiation of claims,” nor should GAI be used to create a website intake chatbot that could inadvertently “provide legal advice, fail to immediately identify itself as a chatbot, or fail to include clear and reasonably understandable disclaimers limiting the lawyer’s obligations.”
Another key topic addressed was legal fees for GAI usage. The Committee explained that clients should be informed, preferably in writing, of “the lawyer’s intent to charge a client the actual cost of using generative AI,” and that lawyers can charge for the “reasonable time spent for case-specific research and drafting when using generative AI.”
Importantly, the Committee acknowledged that learning about GAI is part of a lawyer’s duty of technology competence, and explained that lawyers are obligated to develop competency in their use of new technologies like GAI, including their risks and benefits. However, clients cannot be charged for “time spent developing minimal competence in the use of generative AI.” In other words, you can’t charge clients for the cost of maintaining your ethical duty of technology competence.
The opinion also covers other ethics issues, so make sure to read it in its entirety. I don’t think certain requirements will withstand the test of time, such as the need to notify clients that you plan to use GAI. In the past, when ethics committees have required client notification as it relates to technology usage, that obligation has faded over time as the technology became commonplace. GAI will follow the same course, but it will happen much faster.
That’s why these ethics opinions are so important: they provide lawyers with a roadmap for the ethical use of these tools. That being said, I don’t think these GAI opinions are technically necessary. Earlier opinions provide sufficient guidance for ethical technology adoption that can be easily applied to GAI. However, from a practical standpoint, these opinions serve a purpose: they provide a framework for moving forward and encourage lawyers to embrace GAI and the future that it brings. This means you have no excuse: there’s no better time than now to become familiar with GAI and its significant potential.
Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software, an AffiniPay company. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].