
New York On the Ethics of Expensing Credit Card Processing Fees to Clients

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

New York On the Ethics of Expensing Credit Card Processing Fees to Clients

One of the key business challenges lawyers face is getting paid. When cash or check were the only choices, there was little payment flexibility available for law firms or their clients. Today, things have changed. Most billing and law practice management software programs have built-in features that streamline the billing process and allow law firms to offer payment convenience in ways never before possible. From payment plans to credit cards and even “Pay Later” legal fee loan options, lawyers and their clients have more options than ever.

Regardless of the payment method, lawyers must comply with the ethics requirements surrounding legal fees. As new payment methods become available, ethics committees often weigh in to ensure that lawyers have sufficient guidance when accepting alternative payment methods. 

One area that has received considerable attention from regulators over the years is credit card payments, which are now commonly accepted in most law firms. Despite their widespread use, novel ethics issues surrounding credit card payments occasionally arise that require guidance, such as the recent issue addressed by the New York State Bar Association Committee on Professional Ethics in Ethics Opinion 1258A.

At issue was whether a lawyer may pass on merchant processing fees to clients as an expense. At the outset, the Committee acknowledged that accepting credit cards as payment has long been permissible in New York provided that 1) the legal fee is reasonable, 2) client confidentiality is protected, 3) the credit card company’s actions do not impact client representation, 4) the client is advised before the charge is incurred and has the chance to dispute any billing errors, and 5) any disputes regarding the legal fee are handled according to the fee dispute resolution program outlined in 22 N.Y.C.R.R. Part 137.

Next, the Committee turned to the issue of expensing credit card fees to clients, explaining that excessive fees or expenses are prohibited by Rule 1.5(a) of the New York Rules of Professional Conduct (Rules). 

According to the Committee, this prohibition applies to a merchant processing fee since it is considered an “expense” under the Rules. As long as lawyers avoid charging excessive fees as defined in Rule 1.5(a), it is permissible to pass on to clients, as expenses, the merchant processing fees incurred when legal fees are paid by credit card.

Next, the Committee turned to Ethics Opinion 1050 from 2015, which addressed credit card payments made in the context of advance retainers. In that opinion, the Committee permitted the inquiring lawyer to, “as an administrative convenience, charge a client a nominal amount over the actual processing fees imposed on the lawyer by a credit card company in connection with the client’s payment by credit card of the lawyer’s advance payment retainer.”  

Doing so was conditioned upon 1) notifying the client and obtaining consent and 2) ensuring that the additional fee was nominal and that the total amount of the advance payment retainer, the processing fees, and the convenience fee was reasonable under the circumstances.

The Committee then turned to the case at hand and applied the same principles, concluding that when legal fees beyond the initial retainer are paid by credit card, a “lawyer may pass on a merchant processing fee to clients who pay for legal services by credit card provided that both the amount of the legal fee and the amount of the processing fee are reasonable, and provided that the lawyer has explained to the client and obtained client consent to the additional charge in advance.”

In 2024, lawyers have unprecedented flexibility in payment methods. However, a thorough understanding of your ethical obligations is essential, especially when your firm broadens client payment options. This opinion is an important reminder to carefully navigate ethics rules when accepting credit card payments from clients, especially as technology continues to evolve and impact how law firms do business. 

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

ABA Weighs in on Listserv Ethics

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

ABA Weighs in on Listserv Ethics

At first glance, you might assume that this article was published in the early 1990s and was reprinted by mistake. If so, you’d be wrong. The truth is, the American Bar Association (ABA), in its infinite wisdom, decided that May 2024—in the midst of the generative AI technology revolution—was the ideal time to address the ethical issues presented when lawyers interact on listservs, an email technology that has existed since 1986.

So hold on to your hats, early technology adopters, while we break this opinion down so that you have the ethics guidance needed to appropriately interact when using technology that has been around longer than the World Wide Web.

In Formal Opinion 511, the ABA considered whether a lawyer who interacts on a listserv and seeks advice regarding a client matter is “impliedly authorized to disclose information relating to the representation of a client or information that could lead to the discovery of such information.”

At the outset, the ABA Standing Committee on Ethics and Professional Responsibility explained that the duty of confidentiality prohibits lawyers from disclosing any information related to a client’s representation, no matter the source. Protected information is not limited to “communications protected by attorney-client privilege” and includes clients’ identities and even publicly available information like transcripts of proceedings.

Next, the Committee acknowledged that generally speaking, lawyers are permitted to consult with an attorney outside of their firm regarding a matter and may reveal information related to the representation in the absence of client consent, but only if the “information is anonymized or presented as a hypothetical and the information is revealed under circumstances in which…the information will not be further disclosed or otherwise used against the consulting lawyer’s client.” In addition, the information shared may not be privileged and must be non-prejudicial.

However, according to the Committee, this implied authority to disclose anonymized or hypothetical case-related information to other attorneys is limited to professional consultations with other lawyers. This is because “participation in most lawyer listserv discussion groups is significantly different from seeking out an individual lawyer or personally selected group of lawyers practicing in other firms for a consultation about a matter.”

The Committee noted that listservs can consist of unknown participants, and posts can be forwarded or otherwise shared and viewed by non-participants, including other lawyers representing a party in the same matter. As a result, “posting to a listserv creates greater risks than the lawyer-to-lawyer consultation.”

Given the risks, in the absence of client consent, lawyers are ethically prohibited from posting anything to a listserv that could reasonably be linked to an identifiable client, whether the intent is to obtain assistance in a case or otherwise engage on the listserv. 

Listserv use is not forbidden, however, and lawyers can interact in other ways, such as asking more general questions, sharing news updates, requesting access to a case, or seeking a form or document template.

Finally, the Committee expanded the opinion’s rationale to other types of interactions. The Committee opined that the “principles set forth in this opinion…apply equally when lawyers communicate about their law practices with individuals outside their law firms by other media and in other settings, including when lawyers discuss their work at in-person gatherings.”

That single line, easily missed at the beginning of the opinion, ensures that the Committee’s conclusions stand the test of time. 

This opinion on listserv ethics is a necessary reminder of the importance of confidentiality in all lawyer interactions, even when using long-established technologies like listservs. While the ABA’s timing could have been better, this advisory opinion is worth a thorough read. Take a look and then keep the Committee’s advisements in mind as you interact with other lawyers online and off. 

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

New York Bar Association New AI Guidance: Part 2

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

New York Bar Association New AI Guidance: Part 2

In last week’s column, I shared news of the recent artificial intelligence (AI) guidance handed down by the New York State Bar Association’s AI Task Force in the “Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence.”

In this lengthy report, the Task Force addressed a wide range of issues, including 1) an in-depth examination of the evolution of AI and GenAI, 2) its risks and benefits, 3) how it impacts society and the practice of law, and 4) ethics guidelines and recommendations for lawyers who use these tools.

Last week we discussed several areas of focus from the report, including data preservation issues and the impact of deepfake evidence on the judicial process. Today we’ll dive into the legal ethics guidance provided in the report.

First, the Task Force considered Rule 1.2 and the scope of representation. It explained that if you will be using generative AI in the course of handling a matter, you should consider including a statement to that effect in your engagement letter, which your client should acknowledge receiving. 

The following sample language was suggested: “Use of Generative AI: While representing you, we may use generative AI tools and technology to assist in legal research, document drafting, and other legal tasks. This technology enables us to provide more efficient and cost-effective legal services. However, it is important to note that while generative AI can enhance our work, it is not a substitute for the expertise and judgment of our attorneys. We will exercise professional judgment in using AI-generated content and ensure its accuracy and appropriateness in your specific case.”

Diligence pursuant to Rule 1.3 was also addressed. The Task Force emphasized that as part of due diligence, lawyers must determine the benefits and drawbacks of using AI tools for a specific case. 

Next, the Task Force turned to Rule 1.4 and cautioned that lawyers should not rely on AI tools to replace client communication. Certainly, lawyers can use the tools to “assist with generating documents or responses” but the duty to ensure an open and clear line of communication lies with attorneys who must “maintain direct and effective communication with…client(s) and not rely solely on content” created by AI.

The impact of AI usage on legal fees under Rule 1.5 was also considered. The Task Force emphasized that AI-driven efficiency gains - or unrealized gains resulting from the failure to use this technology - should be taken into account when determining what constitutes a “reasonable” fee. Additionally, engagement letters should include mention of any surcharges billed to clients: “If you will add a ‘surcharge’ (i.e., an amount above actual cost) when using specific Tools, then you should clearly state such charges in your engagement letter, provided that the total charge remains reasonable.”

The Task Force also advised that the confidentiality requirements of Rule 1.6 apply when using AI software. Lawyers should obtain client consent to use these tools and have a continuing duty to ensure that providers will protect confidential data and keep each client’s data segregated and protected.

Next, the Task Force considered Rules 5.4 and 5.5, which require lawyers to exercise professional independence and prohibit them from relying solely on AI-produced output without reviewing and carefully considering the results. AI tools “should augment but not replace your legal work.”

The Task Force opined that because AI may increase efficiency significantly, it also has the potential to increase the “amount and scope of the pro bono legal services” that lawyers can provide. As such, under “the application of Rule 6.1, you are encouraged to use the Tools to enhance your pro bono work.”

Another ethics issue that the Task Force discussed was advertising and the requirements of Rule 7.1, which require lawyers to carefully oversee all AI-created content publicly posted on their behalf and to ensure that it is “truthful and non-deceptive.”

Finally, the Task Force cautioned that Rule 7.3 requires avoiding AI software usage when generating “phone calls, chat board posts or other forms of solicitation” whether made by the attorney or someone else on their behalf.

This guidance should serve as encouragement to use these tools, not discouragement. The Task Force has provided comprehensive guidance to assist in integrating these tools into law practices. Rather than replacing the nuanced judgment of attorneys, AI improves their ability to represent clients effectively by increasing the efficiency and scope of legal services delivered. 

In other words, take advantage of all these tools have to offer. Follow the Task Force’s recommendations, embrace your duty of technology competence, and begin learning about generative AI today.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


New York Bar Association New AI Guidance: Part 1

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

*****

New York Bar Association New AI Guidance: Part 1

It’s been just over a year since OpenAI unleashed ChatGPT Plus powered by GPT-4 into the world, and things haven’t been the same since. From deepfake videos and sophisticated scams powered by generative artificial intelligence (GenAI) tools to lawyers making headlines for citing fake cases obtained from GenAI requests, this technology is dramatically altering the way that lawyers - and swindlers - achieve their goals.

Bar association ethics committees have been trying to keep up, with California, Florida, and New Jersey leading the way, having issued GenAI guidance for lawyers. Most recently, New York jumped into the fray on April 6, releasing the lengthy 91-page “Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence.” 

This far-reaching report provides: 1) an in-depth examination of the evolution of AI and GenAI, 2) its risks and benefits, 3) how it impacts society and the practice of law, and 4) ethics guidelines and recommendations for lawyers who use these tools.

Today we’ll focus on broader issues addressed in the report that impact the legal profession and courts and will cover the legal ethics guidance in the next column.   

One reason for the report’s breadth is that it isn’t limited to the ethics issues presented by GenAI usage in the legal profession, and instead touches on the many other legal implications that arise when lawyers incorporate these tools into their daily workflows. According to the Committee, there are a number of notable issues attorneys should consider when utilizing ChatGPT and other similar generative AI tools. 

First, when choosing a provider, a thorough understanding of licensing information, the terms of use, and applicable privacy policies is required. Lawyers should also determine what happens to data input into a GenAI tool. Will the provider use that data to train or refine their AI model? Will the provider allow third parties to access that data to train or improve their AI model? Are data inputs protected and stored in an encrypted, closed environment that anonymizes queries and prevents third parties, including opponents and adversaries, from accessing them?

The Committee also addressed the data preservation implications triggered when legal professionals use GenAI to prepare a case for litigation. According to the Committee, “If the data that is inputted into the AI application is temporary/ephemeral, but also relevant and responsive to the litigation, parties have the duty to preserve this electronically stored information.”

Finally, another important topic included in the report was GenAI-created deepfake evidence and its impact on trials. The Committee acknowledged the challenge presented by synthetic evidence, explaining that “(d)eciding issues of relevance, reliability, admissibility and authenticity may still not prevent deepfake evidence from being presented in court and to a jury.”

According to the Committee, the solution to this problem — and to all issues presented by the impact of GenAI on the practice of law — is education. Lawyers, law students, judges, and regulators must take steps to ensure they fully understand this technology and how existing laws and regulations apply to it. 

Importantly, the Committee emphasized that this new technology is no different than the ones that preceded it, and thus “(m)any of the risks posed by AI are more sophisticated versions of problems that already exist and are already addressed by court rules, professional conduct rules and other law and regulations. Furthermore, many risks are mitigated through understanding the technology and how AI will utilize data input into the AI system.” 

In other words, embracing technology, not fearing it, is the only path forward. Adapting and learning about these tools is essential for maintaining efficacy and relevance in a technology-first world. Of course, GenAI adoption must be done ethically, and the Committee provided lots of helpful guidance in that regard. We’ll learn more about that in my next column when we explore the ethical implications outlined in the report, so stay tuned.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

North Carolina Adds to Growing Body of AI Ethics Guidance for Lawyers

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

*****

North Carolina Adds to Growing Body of AI Ethics Guidance for Lawyers

As generative AI (GAI) technology proliferates and legal software companies focus on rolling this new functionality into their platforms, ethics committees across the country are recognizing and responding to the implementation challenges that lawyers face. GAI providers promise streamlined workflows and increased efficiencies, but with these benefits come concerns about ensuring accuracy in the results, adequate supervision, confidentiality preservation, and compliant billing processes. 

Because of the many hurdles faced when using GAI in its current state, several jurisdictions have issued ethics guidance over the past few months, which I’ve covered in earlier columns. The State Bar of California was first when it released guidance in November 2023. This was followed by Florida, which issued Ethics Opinion 24-1 on January 19th, and by the New Jersey Supreme Court Committee on Artificial Intelligence and the Courts, which handed down preliminary guidelines on January 24th.

On January 19th, the North Carolina State Bar Council jumped into the fray with proposed guidance, 2023 Formal Ethics Opinion 4 (online: https://www.ncbar.gov/for-lawyers/ethics/proposed-opinions/), which is open for comments through March 30th.

This proposed opinion offers a thorough analysis of the many issues lawyers encounter when using GAI tools, along with commonsense and clear-cut guidance on the ins and outs of adopting GAI in an ethically compliant manner. 

Many of the Council’s conclusions mirror those reached by other ethics committees. For example, the Council concluded that lawyers may use GAI, but the duty of competence means they are responsible for “reviewing, evaluating, and ultimately relying upon the work produced by someone—or something—other than the lawyer,” which includes GAI output. Furthermore, the duty of technology competence requires lawyers to learn about GAI so that they can responsibly “exercise independent professional judgment in determining how (or if)” using GAI is appropriate.

The Council opined that lawyers must carefully vet GAI providers to ensure confidential client information is protected, just as they are required to do when “providing confidential information to a third-party software program (practice management, cloud storage, etc.)” The Committee cautioned that when lawyers use consumer-grade GAI software, they should avoid “inputting client-specific information into publicly available AI resources” to prevent confidential data from being used to train the AI system.

Importantly, the Committee clarified that when lawyers use GAI to help draft pleadings and adopt the output as their own, signing the pleading certifies their “good faith belief as to the factual and legal assertions therein,” a practice that necessarily applies to all pleadings submitted to the court, regardless of their origination source.

Client consent was also addressed. The Committee determined that when GAI is used for ordinary tasks like “conducting legal research or generic case/practice management,” client consent is unnecessary, whereas it would be required in advance for any substantive tasks that are “akin to outsourcing legal work to a nonlawyer.”

Finally, the Committee addressed legal billing issues and clarified that for hourly billing, lawyers may only bill clients for time actually spent using GAI and may not bill for the time saved through the use of this tool. However, the Committee suggested that due to the arguable reduction in billable hours that can be achieved through the use of GAI, lawyers might want to consider transitioning to flat fee billing “for the drafting of documents—even when using AI to assist in drafting—provided the flat fee charged is not clearly excessive and the client consents to the billing structure.”

As for expensing the cost of using a GAI tool, doing so is only permissible when the fee charged is “for actual expenses incurred when employing AI in the furtherance of a client’s legal services, provided the expenses charged are accurate, not clearly excessive, and the client consents to the charge, preferably in writing.” In comparison, charging a general administrative fee to clients to cover the cost of AI tools embedded in software generally used by the firm is unacceptable.

North Carolina's addition to the growing body of AI ethics guidance for lawyers highlights the important balance required to leverage AI's benefits while adhering to ethical standards. The conclusions in the opinion align with those of other jurisdictions and emphasize that the core principles of legal ethics remain unchanged even as technology advances at a rapid pace. As we continue to witness the integration of AI into various aspects of legal work, guidance like North Carolina's becomes invaluable for lawyers striving to maintain the highest standards of professionalism in the digital age.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software, an AffiniPay company. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

New Jersey Preliminary AI Guidelines Released 

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

*****


New Jersey Preliminary AI Guidelines Released 

If you’re hesitant to test drive generative artificial intelligence technology (GAI), rest assured, you’re not alone. According to the results of the newly released LawPay and MyCase 2024 Legal Technology Report (online: https://www.lawpay.com/support/resources/reports/2024-legal-industry-report/), only 24% of law firms had implemented GAI tools as of September 2023. Respondents cited many barriers to adoption, including a lack of sufficient knowledge about GAI and how to use it (61%) and ethical concerns (53%).

Fortunately, for those holding out due to a lack of ethics guidance, help has arrived. Legal ethics committees across the country are responding to the demand for assistance and are rapidly handing down guidance. The State Bar of California’s Committee on Professional Responsibility and Conduct was the first to step up and release a thorough roadmap for ethical generative AI adoption in law firms in November 2023. This guidance was extensive and addressed many different issues including technology competence, confidentiality, and the requirement of candor about AI usage with legal clients and courts.

More recently, both Florida and New Jersey released guidance. Florida issued Ethics Opinion 24-1 on January 19th, which I covered in last week’s column. Then, the New Jersey Supreme Court Committee on Artificial Intelligence and the Courts handed down preliminary guidelines on January 24th (online: https://njsba.com/wp-content/uploads/2024/01/Preliminary-Guidelines-on-the-Use-of-AI-by-NJ-Lawyers.pdf), discussed below.

At the outset, the committee cautioned that its guidelines were preliminary and did not address all issues triggered by GAI usage, including advertising and legal billing. The Committee explained that additional guidance may be issued as warranted as new concerns about GAI usage arise.

Next, the Committee acknowledged the inevitability of GAI use in the practice of law: “[t]he ongoing integration of AI into other technologies suggests that its use soon will be unavoidable, including for lawyers.” Like other jurisdictions, the Committee also emphasized the need for lawyers to ensure adequate oversight of all AI usage “by other lawyers and non-lawyer staff” due to the infancy of these tools.  

Confidentiality was also addressed, with the Committee highlighting the need to carefully vet GAI providers, and acknowledging the rapid growth in the number of legal-specific GAI tools now available to lawyers: “Today, the market is replete with an array of AI tools, including some specifically designed for lawyers, as well as others in development for use by law firms. A lawyer is responsible for ensuring the security of an AI system before entering any non-public client information.”

One notable conclusion: in contrast to the Florida committee, the New Jersey committee determined that the rules “do not impose an affirmative obligation on lawyers to tell clients every time that they use AI,” though some situations might require disclosure.

Finally, the Committee reminded lawyers of the importance of maintaining technology competence in light of the rapid pace of GAI advancement: “In this complex and evolving landscape, lawyers must decide whether and to what extent AI can be used so as to maintain compliance with ethical standards without falling behind their colleagues.”

In conclusion, the recent release of AI guidance by legal ethics committees in California, Florida, and New Jersey provides valuable insight for lawyers seeking to adopt GAI in their firms. However, as the Louisiana Supreme Court's recent letter (online: https://www.lsba.org/documents/News/LSBANews/LASCLetterAI.pdf) highlights, while this guidance is helpful, it may be unnecessary. According to that court, existing rules are sufficient since “the ethical and professional rules governing the bench and the Bar are robust and broad enough to cover the landscape of issues presented by AI in its current forms.”

In other words, even without specific ethics guidance, GAI adoption is no different than the adoption of other types of technology, so there’s nothing holding you back from learning about GAI and making educated, informed decisions about whether and how to use it in your law firm.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software, an AffiniPay company. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

Florida Bar Hands Down Opinion on AI Ethics

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

*****

Florida Bar Hands Down Opinion on AI Ethics

We’re heading into the second year of generative artificial intelligence (GAI) availability, and if you haven’t been paying attention to this cutting-edge tool, there’s no better time than the present. GAI is advancing at an exponential rate and is rapidly being incorporated into the software tools you use every day. From legal research and contract drafting to law practice management and document editing, GAI is everywhere, and avoiding it is no longer an option. 

This is especially so now that legal ethics committees across the country are rising to the challenge and issuing GAI guidance. The first to do so was the State Bar of California’s Committee on Professional Responsibility and Conduct, which handed down a thorough roadmap for ethical generative AI adoption in law firms in November 2023. As I discussed in another article, the guidance provided was extensive and covered many different ethical issues including technology competence, confidentiality, and the requirement of candor, both with legal clients and courts.

More recently, both Florida and New Jersey released guidance, with Florida issuing Ethics Opinion 24-1 on January 19th, and New Jersey handing down preliminary guidelines on January 24th. Below I’ll break down the Florida opinion; in my next article, I’ll address the New Jersey guidance.

In Florida Bar Ethics Opinion 24-1 (online: https://www.floridabar.org/etopinions/opinion-24-1/), the Board Review Committee on Professional Ethics concluded that lawyers may use GAI, but, of course, must do so ethically. The Committee addressed a broad range of ethics issues in the opinion, ranging from confidentiality and pricing to supervision and lawyer advertising.

At the outset, the Committee explained that GAI tools can “create original images, analyze documents, and draft briefs based on written prompts,” but in doing so can hallucinate, which means providing “inaccurate answers that sound convincing.” As a result, the Committee cautioned that all output must be carefully reviewed for accuracy.

Next, the Committee reviewed GAI in the context of confidentiality and advised that lawyers who use GAI must have a thorough understanding of how the GAI technology handles data input and whether it uses input to train the GAI system. 

According to the Committee, a key way to do that is to vet GAI providers in much the same way as cloud computing providers, by taking steps to: 1) ensure that the provider has an enforceable obligation to preserve data confidentiality and security; 2) confirm that the provider will notify the customer in the event of a breach or service of process requiring the production of client information; 3) investigate the provider’s reputation, security measures, and policies, including any limitations on the provider’s liability; and 4) determine whether the provider retains submitted information before and after the discontinuation of services, or otherwise asserts proprietary rights to the information.

When it comes to client consent, the Committee noted that one way to mitigate confidentiality concerns would be to use in-house GAI tools like the legal-specific tools I referenced above, “where the use of a generative AI program does not involve the disclosure of confidential information to a third-party” in which case, “a lawyer is not required to obtain a client’s informed consent pursuant to Rule 4-1.6.” 

Next, the Committee reiterated the obligation to carefully review the output of any GAI tool, and that lawyers must ensure that all firm users are instructed to do this as well. The Committee cautioned that lawyers should not delegate work to a GAI tool that “constitute(s) the practice of law such as the negotiation of claims” nor should GAI be used to create a website intake chatbot that could inadvertently “provide legal advice, fail to immediately identify itself as a chatbot, or fail to include clear and reasonably understandable disclaimers limiting the lawyer’s obligations.”

Another key topic addressed was legal fees for GAI usage. The Committee explained that clients should be informed, preferably in writing, of the lawyer’s intent to charge the client “the actual cost of using generative AI,” and that lawyers can charge for the “reasonable time spent for case-specific research and drafting when using generative AI.”

Importantly, the Committee acknowledged that learning about GAI is part of a lawyer’s duty of technology competence, and explained that lawyers are obligated to develop competency in their use of new technologies like GAI, including their risks and benefits. However, clients cannot be charged for “time spent developing minimal competence in the use of generative AI.” In other words, you can’t charge clients for the cost of maintaining your ethical duty of technology competence.

The opinion also covers other ethics issues, so make sure to read it in its entirety. I don’t think certain requirements will withstand the test of time, such as the need to notify clients that you plan to use GAI. In the past, when ethics committees have required client notification as it relates to technology usage, that obligation has faded over time as the technology became commonplace. GAI will follow the same course, but it will happen much faster. 

That’s why these ethics opinions are so important: they provide lawyers with a roadmap for the ethical use of these tools. That being said, I don’t think these GAI opinions are technically necessary. Earlier opinions provide sufficient guidance for ethical technology adoption that can be easily applied to GAI. However, from a practical standpoint, these opinions serve a purpose: they provide a framework for moving forward and encourage lawyers to embrace GAI and the future that it brings. This means you have no excuse: there’s no better time than now to become familiar with GAI and its significant potential. 



California Ethics Committee is First to Weigh in On AI

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

*****

California Ethics Committee is First to Weigh in On AI

Back in August, I discussed the imminent arrival of legal ethics opinions on generative AI tools. Since then, a small number of pioneering lawyers have experimented with this technology even without ethics guidance. For those who have been holding off until an ethics committee weighed in, your wait is over. 

As we head into the final month of 2023, you’ll be happy to know that ethics guidance focused on generative AI was published on November 16 by the State Bar of California’s Committee on Professional Responsibility and Conduct (COPRAC). 

It provides a thorough roadmap for generative AI adoption in law firms. 

At the outset, COPRAC explained that AI in its current state is simply a new kind of technology and does not warrant special treatment. As a result, “the existing Rules of Professional Conduct are robust, and the standards of conduct cover the landscape of issues presented by generative AI in its current forms. However, COPRAC recognizes that generative AI is a rapidly evolving technology that presents novel issues that might necessitate new regulation and rules in the future.”

The guidance provided by COPRAC was extensive, addressing many different ethical issues, including technology competence, confidentiality, and the requirement of candor with both legal clients and courts. Below you’ll find some of the most notable takeaways.

The first topic tackled was the duty of confidentiality. According to COPRAC, lawyers “must not input any confidential information of the client into any generative AI solution that lacks adequate confidentiality and security protections.” The overarching obligation lawyers have in this regard is to fully vet AI providers and their products (or have an IT consultant do this on your firm’s behalf) so that you fully understand how confidential data will be handled and protected.

COPRAC also addressed technology competence, explaining that lawyers must learn, to a reasonable degree, “how the technology works, its limitations, and the applicable terms of use and other policies governing the use and exploitation of client data by the product.” In addition to ensuring an understanding of generative AI, the ethics rules also require that AI outputs be carefully reviewed for accuracy and bias.

Next up was the duty of supervision. COPRAC explained that supervisory and managerial attorneys must ensure that clear policies are in place that address permissible uses of AI. Those measures must provide “reasonable assurance that the firm’s lawyers and non lawyers’ conduct complies with their professional obligations when using generative AI.”

COPRAC further emphasized the importance of full candor with courts and clients. To that end, lawyers should carefully review all generative AI outputs for accuracy and correct any errors before submission to courts. The committee cautioned that “(o)verreliance on AI tools is inconsistent with the active practice of law and application of trained judgment by the lawyer…and AI-generated outputs can be used as a starting point but must be carefully scrutinized.”

Similarly, client communication obligations require that lawyers consider disclosing their intention to use generative AI to clients, “including how the technology will be used, and the benefits and risks of such use.” The committee also advised lawyers to be aware of any client directives that might conflict with the use of AI in their case.

Last but not least, COPRAC weighed in on billing clients for AI-related work product, explaining that lawyers may charge for the time spent creating, refining, and reviewing generative AI outputs. Notably, the committee opined that charging for the time saved by using generative AI is impermissible. Finally, it determined that fee agreements “should explain the basis for all fees and costs, including those associated with the use of generative AI.”

With the issuance of this widely-anticipated guidance, nothing is holding you back from diving into the generative AI waters. With the recent release of legal generative AI products from LexisNexis and Thomson Reuters, and with many other legal AI products in the works, there’s no better time than now to take advantage of all that this technology offers. Rest assured, if you aren’t using it and reaping the time-saving benefits and efficiencies, your competitors undoubtedly will be.



Legaltech Due Diligence: Evaluating Cloud and AI Software

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

*****

Legaltech Due Diligence: Evaluating Cloud and AI Software

Technology is evolving at a pace never before seen. While it can often feel overwhelming, there’s no better time to embrace change by incorporating emerging tools like cloud-based software and artificial intelligence (AI) into your law firm. Given the rapid pace of change, your best option is to do all you can to avoid falling behind.

Of course, whenever you consider implementing new cloud-based technology, including AI software, into your law firm, it’s essential to thoroughly understand the implications of technology adoption and fully vet all software providers that will handle your firm’s data. Your ethical obligation is to take reasonable steps to ensure that all confidential information will be properly protected and securely maintained.   

One of the first things you can do to ensure your firm’s data security is to use legal technology tools rather than consumer-grade software. Legal technology providers understand the needs and ethical obligations of legal professionals and are thus better equipped to meet your needs. But even when using legal software, you are still required to understand how your firm’s data will be handled and protected. 

If you’re not sure where to start, I’ve got you covered. Below you’ll find a partial list of questions to ask cloud and AI companies. The comprehensive list of questions can be accessed online here: https://www.lawtechtalk.com/questions-to-ask-cloud-providers.html.

These questions will help you vet legal software providers by clarifying: 1) who will have access to your firm’s data and under what conditions; 2) what steps will be taken to secure the data; 3) what types of data backup procedures are in place; 4) whether the accuracy and reliability of the output are sufficient for your needs; and 5) how you can export your data should you decide to switch providers.

Partial List of Questions to Ask AI Providers 

  • What is your AI’s core technology and architecture?
  • What data does your AI require for training?
  • How do you ensure your AI model’s accuracy?
  • How does your AI handle bias and fairness?
  • How is your AI model updated and improved over time?
  • What is your AI’s interpretability and transparency like?
  • What is your model’s performance in real-time applications?
  • Can your AI model be customized to our specific needs?
  • What kind of support and training do you provide?
  • How do you ensure confidentiality, data security, and privacy?

Partial List of Questions to Ask Cloud Providers

  • How long has the company been around?
  • What type of facility will host your law firm's data?
  • Who else has access to the cloud facility, the servers, and the data?
  • How does the vendor screen its employees?
  • Is the data accessible by the vendor’s employees limited to only those situations where you request assistance?
  • If there are integrations with the company's product, how does the company screen the security processes of the other vendors?
  • If there is a problem with a product that integrates with the vendor's software, which company will be responsible for addressing the issue?
  • Does the contract with the vendor address confidentiality?
  • How often are backups performed?
  • What types of encryption methods are used?

Keeping up with change isn’t always easy, but it’s essential in today’s fast-paced environment. Staying up-to-date and carefully vetting the companies that will provide technology solutions for your firm will lay the groundwork for future success. Your diligent efforts will pay off in the long run, enabling your law firm to thrive by leveraging technology that levels the playing field and allows you to compete in innovative ways never before imagined. 



Sharing space, not secrets: Office sharing insights from ABA Formal Opinion 507

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

*****

Sharing space, not secrets: Office sharing insights from ABA Formal Opinion 507

The landscape of our lives looks very different now than it did before the pandemic struck. Nowhere is this more apparent than in the workplace. Remote work is more common than ever and increased technology usage has enabled more flexible and creative work arrangements. Because five-day in-office work weeks are less common, office-sharing arrangements have become more palatable for lawyers. Less office space is required due to hybrid work schedules, thus allowing more people to work from one office and divide rental costs while also sharing resources. 

However, with more office-sharing by attorneys comes the need to carefully balance the convenience with the potential risks this type of arrangement can pose. Fortunately, there is guidance available in the form of a recently released ethics opinion, ABA Formal Opinion 507.

Handed down in July, this opinion addresses the ethical issues that arise when lawyers participate in office-sharing arrangements. The Standing Committee on Ethics and Professional Responsibility concluded that it is generally permissible for lawyers to share offices with others, but when doing so there are a number of ethical issues to keep top of mind.

First, the Committee cautioned that it’s essential to take steps to protect client confidentiality. Lawyers must ensure that the physical arrangement of the shared office space does not expose client information to other office-sharing lawyers or their staff. Safeguards that should be considered include maintaining separate waiting areas, installing privacy screens, and using technology to provide secure storage for client files. 

The Committee discussed the importance of using separate telephone lines and computer systems, along with providing staff training to protect client information: "(L)awyers can also restrict access to client-related information by securing physical client files in locked cabinets or offices and using separate telephone lines and computer systems. Lawyers, however, may overcome confidentiality concerns with shared telephone and computer systems with appropriate security measures, staff training, and client disclosures." 

While keeping client information secure is paramount, it's not the only ethical obligation lawyers need to consider. Clear communication was also emphasized. According to the Committee, lawyers have an ethical obligation to clearly communicate the nature of their relationship to the public and clients to avoid misleading impressions. There are a number of ways that lawyers sharing office space can ensure compliance, including using separate business cards, letterheads, and directory listings.

The Committee also opined on the importance of taking steps to avoid conflicts of interest, explaining that attorneys “should pay particular attention to (1) avoiding the imputation of conflicts of interest, (2) taking on potential new matters that are adverse to clients represented by other office sharing lawyers, and (3) consulting with fellow office sharing lawyers.” 

Another area to approach with caution is sharing staff with other lawyers. If lawyers decide to share support staff, they must instruct all employees regarding their confidentiality obligations and take steps to supervise them appropriately. 

Finally, the Committee addressed issues that arise when lawyers who share office space consult with one another about their cases. According to the Committee, lawyers should avoid disclosing client-identifying or privileged information during informal consultations, and discussing issues through hypotheticals is recommended. Notably, these consultations can sometimes lead to unexpected conflicts of interest that could limit a lawyer's ability to represent current or future clients. Therefore, a standard conflict check should be conducted before any informal consultation in order to mitigate this risk.

The pandemic and technological changes have upended traditional legal practices. As a result, we now have an unprecedented array of options for how and where to conduct our work. But as this opinion reminds us, with this newfound flexibility comes a heightened ethical burden. In a landscape that's shifting almost as quickly as technology itself, this opinion provides much-needed guidance for lawyers seeking to take advantage of the many benefits offered by hybrid work arrangements like office-sharing.
