
Technology Competence in the Age of Artificial Intelligence

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Technology Competence in the Age of Artificial Intelligence

With technology evolving so quickly, powered by the rapid development of generative artificial intelligence (AI) tools, keeping pace with change becomes all the more critical. For lawyers, the ethical requirement of maintaining technology competence plays a large part in that endeavor.

The duty of technology competence is relatively broad, and the obligations required by this ethical rule can sometimes be unclear, especially when applied to emerging technologies like AI. Rule 1.1 states that a “lawyer should provide competent representation to a client.” The comments to this rule clarify that to “maintain the requisite knowledge and skill, a lawyer should . . . keep abreast of the benefits and risks associated with technology the lawyer uses to provide services to clients or to store or transmit confidential information.”

With the proliferation of AI, this duty has become all the more relevant, especially as trusted legal software companies begin to incorporate this technology into the platforms that legal professionals use daily in their firms. Lawyers seeking to take advantage of the significant workflow efficiencies that AI offers must ensure that they’re doing so ethically. 

That’s easier said than done. In today’s fast-paced environment, what is required to meet that duty? Does it simply require that you understand the concept of AI? Do you have to understand how AI tools work? Is there a continuing obligation to track changes in AI as it advances? If you have no plans to use it, can you ignore it and avoid learning about it? 

Fortunately for New York lawyers, there are now two sets of ethics guidance available: the New York State Bar's April 2024 Report and Recommendations from the Taskforce on Artificial Intelligence, and, more recently, Formal Opinion 2024-5, issued by the New York City Bar Association.

The New York State Bar's guidance on AI is overarching and general, particularly regarding technology competence. As the "AI and Generative AI Guidelines" provided in the Report explain, lawyers "have a duty to understand the benefits, risks and ethical implications associated with the Tools, including their use for communication, advertising, research, legal writing and investigation."

While instructive, the advice is fairly general, and intentionally so. As the Committee explained, AI is no different than the technology that preceded it, and thus, “(m)any of the risks posed by AI are more sophisticated versions of problems that already exist and are already addressed by court rules, professional conduct rules and other law and regulations.” 

Lawyers seeking more concrete guidance on technology competence when adopting AI need look no further than the New York City Bar's AI opinion, in which the Ethics Committee offers significantly more granular insight into these obligations.

First, lawyers must understand that current generative AI tools may include outdated information “that is false, inaccurate, or biased.” The Committee requires that lawyers understand not only what AI is but also how it works. 

Before choosing a tool, there are several recommended courses of action. First, you must "understand to a reasonable degree how the technology works, its limitations, and the applicable [T]erms of [U]se and other policies governing the use and exploitation of client data by the product." Additionally, you may want to learn about AI by "acquiring skills through a continuing legal education course." Finally, consider consulting with IT professionals or cybersecurity experts.

The Committee emphasized the importance of carefully reviewing all responses for accuracy, explaining that generative AI outputs "may be used as a starting point but must be carefully scrutinized. They should be critically analyzed for accuracy and bias." The duty of competence requires that lawyers ensure the original input is correct and analyze the corresponding response "to ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand, including as part of advocacy for the client."

The Committee further clarified that you cannot delegate your professional judgment to AI and that you "should take steps to avoid overreliance on Generative AI to such a degree that it hinders critical attorney analysis fostered by traditional research and writing." This means that lawyers should supplement all AI output "with human-performed research and supplement any Generative AI-generated argument with critical, human-performed analysis and review of authorities."

If you plan to dive into generative AI, both sets of guidance should provide a solid roadmap to help you navigate your technology competence duties. Understanding how AI tools function and their limitations is essential when using this technology. By staying informed and applying critical judgment to the results, you can ethically leverage AI's many benefits to provide your clients with the most effective, efficient representation possible.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She is also a co-author of "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, the ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].



Legal Ethics in the AI Era: The NYC Bar Weighs In

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Legal Ethics in the AI Era: The NYC Bar Weighs In

Since November 2022, when ChatGPT was first released, many jurisdictions have issued AI guidance. In this column, I've covered the advice rendered by many ethics committees, including those in California, Florida, New Jersey, Michigan, New York, Pennsylvania, and Kentucky, as well as the American Bar Association and, most recently, Virginia.

Now, the New York City Bar Association has entered the ring, issuing Formal Ethics Opinion 2024-5 on August 7th. The New York City Bar Association Committee on Professional Ethics mirrored the California Bar's approach, providing general guidelines in a chart format rather than prescriptive requirements. The Committee explained that "when addressing developing areas, lawyers need guardrails and not hard-and-fast restrictions or new rules that could stymie developments." Instead, the goal was to provide assistance to New York attorneys through "advice specifically based on New York Rules and practice…"

Regarding confidentiality, the Committee distinguished between “closed systems” consisting of a firm’s “own protected databases,” like those typically provided by legal technology companies, and systems like ChatGPT that share inputted information with third parties or use it for their own purposes. Client consent is required for the latter, and even with “closed systems,” confidentiality protections within the firm must be maintained. The Committee cautioned that the terms of use for a generative AI tool should be reviewed regularly to ensure that the technology vendor is not using inputted information to train or improve its product in the absence of informed client consent.

Turning to the duty of technology competence, the Committee opined that when choosing a product, lawyers “should understand to a reasonable degree how the technology works, its limitations, and the applicable [T]erms of [U]se and other policies governing the use and exploitation of client data by the product.” Also emphasized was the need to avoid delegating professional judgment to these tools and to consider generative AI outputs to be a starting point. Not only must lawyers ensure that the output is accurate, but they should also take steps to “ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand.”

The duty of supervision was likewise addressed, with the Committee confirming that firms should have policies and training in place for lawyers and other employees in the firm regarding the permissible use of this technology, including ethical and practical uses, along with potential pitfalls. The Committee also advised that any client intake chatbots used by lawyers on their websites or elsewhere on behalf of the firm should be adequately supervised to avoid “the risk that a prospective client relationship or a lawyer-client relationship could be created.”

Not surprisingly, the Committee required lawyers to be aware of and comply with any court orders regarding AI use. Another court-related issue addressed was AI-created deepfakes and their impact on the judicial process. According to the Committee’s guidance, lawyers must screen all client-submitted evidence to assess whether it was generated by AI, and if there is a suspicion “that a client may have provided the lawyer with Generative AI-generated evidence, a lawyer may have a duty to inquire.”

Finally, the Committee turned to billing issues, agreeing with other jurisdictions that lawyers may charge for time spent crafting inquiries and reviewing output. Additionally, the Committee explained that firms may not bill clients for time saved as a result of AI usage and that firms may want to explore alternative fee arrangements in order to stay competitive since AI may significantly impact legal pricing moving forward. Last but not least, any generative AI costs should be disclosed to clients, and any costs charged to clients "should be consistent with ethical guidance on disbursements and should comply with applicable law."

The summary above simply provides an overview of the guidance provided. For a more nuanced perspective, you should read the opinion in its entirety. Whether you’re a New York lawyer or practice elsewhere, this guidance is worth reviewing and provides a helpful roadmap for adoption as we head into an AI-led future where technology competence is no longer an option. Instead, it is an essential requirement for the effective and responsible practice of law.



Practical and Adaptable AI Guidance Arrives From the Virginia State Bar 

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Practical and Adaptable AI Guidance Arrives From the Virginia State Bar 

If you're concerned about the ethical issues surrounding artificial intelligence (AI) tools, the good news is that there's no shortage of guidance. A wealth of resources, guidelines, and recommendations are now available to help you navigate these concerns. 

Traditionally, bar associations have taken years to analyze the ethical implications of new and emerging technologies. Generative AI, however, has bucked this trend: ethics guidance has emerged far more quickly, a welcome change from the norm.

Since the general release of the first version of ChatGPT in November 2022, ethics committees have stepped up to the plate and offered much-needed AI guidance to lawyers at a remarkably rapid clip. Jurisdictions that have weighed in include California, Florida, New Jersey, Michigan, New York, Pennsylvania, and Kentucky, along with the American Bar Association.

Recently, Virginia entered the AI ethics discussion with a notably concise approach. Unlike the often lengthy and detailed analyses from other jurisdictions, the Virginia State Bar issued a streamlined set of guidelines, available as an update on its website (accessible online at the bottom of the page: https://vsb.org/Site/Site/lawyers/ethics.aspx). This approach stands out not only for its brevity but also for its focus on providing practical, overarching advice. By avoiding the intricacies of specific AI tools or interfaces, the Virginia State Bar has ensured that its guidance remains flexible and relevant, even as the technology rapidly evolves.

Importantly, the Bar acknowledged that regardless of the type of technology at issue, lawyers’ ethical obligations remain the same: “(A) lawyer’s basic ethical responsibilities have not changed, and many ethics issues involving generative AI are fundamentally similar to issues lawyers face when working with other technology or other people (both lawyers and nonlawyers).”

Next, the Bar examined confidentiality obligations, opining that just as lawyers must review data-handling policies relating to other types of technology, so, too, must they vet the methods used by AI providers when handling confidential client information. The Bar explained that while legal-specific providers can often promise better data security, there is still an obligation to ensure a full understanding of their data management approach: “Legal-specific products or internally-developed products that are not used or accessed by anyone outside of the firm may provide protection for confidential information, but lawyers must make reasonable efforts to assess that security and evaluate whether and under what circumstances confidential information will be protected from disclosure to third parties.”

One area where the Bar's approach conformed to that of most jurisdictions was client consent. While the ABA suggested that explicit client consent is required in many cases when AI is used, the Bar agreed with most other ethics committees, concluding that there "is no per se requirement to inform a client about the use of generative AI in their matter" unless there are extenuating circumstances, such as an agreement with the client or increased risks like those encountered when using consumer-facing products.

The Bar also considered supervisory requirements, emphasizing the importance of reviewing all output, regardless of its source. According to the Bar, as "with any legal research or drafting done by software or by a nonlawyer assistant, a lawyer has a duty to review the work done and verify that any citations are accurate (and real)," and that duty of supervision "extends to generative AI use by others in a law firm."

Next, the Bar provided insight into the impact of AI usage on legal fees. The Bar agreed that lawyers cannot charge clients for the time saved as a result of using AI: “A lawyer may not charge an hourly fee in excess of the time actually spent on the case and may not bill for time saved by using generative AI. The lawyer may bill for actual time spent using generative AI in a client’s matter or may wish to consider alternative fee arrangements to account for the value generated by the use of generative AI.”

On the issue of passing the costs of AI software on to clients, the Bar concluded that doing so was not permissible unless the fee is both reasonable and “permitted by the fee agreement.”

Finally, the Bar focused on a handful of recently issued court rules that forbid the use of AI for document preparation, highlighting the importance of being aware of and complying with all court disclosure requirements regarding AI usage.

The Virginia State Bar’s flexible and practical AI ethics guidance offers a valuable framework for lawyers as they adjust to the ever-changing generative AI landscape. By focusing on overarching principles, this thoughtful approach ensures adaptability as technology evolves. For those seeking reliable guidance, Virginia’s model offers a useful roadmap for remaining ethically grounded amid unprecedented technological advancements.



The ABA Weighs in on the Ethical Use Of AI

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

The ABA Weighs in on the Ethical Use Of AI

Generative artificial intelligence (GenAI) is advancing at exponential rates. Since the release of GPT-4 less than two years ago, there has been an explosion of GenAI tools designed for legal professionals. With the rapid proliferation of software incorporating this technology comes increased concerns about ethical and secure implementation. 

Ethics committees across the country have stepped up to the plate to offer guidance to assist lawyers seeking to adopt GenAI into their firms. Most recently, the American Bar Association weighed in, handing down Formal Opinion 512 at the end of July. 

In its opinion, the ABA Standing Committee on Ethics and Professional Responsibility acknowledged the significant productivity gains that GenAI can offer legal professionals, explaining that GenAI “tools offer lawyers the potential to increase the efficiency and quality of their legal services to clients…Lawyers must recognize inherent risks, however." 

Importantly, the Committee also cautioned that when using these tools, "lawyers may not abdicate their responsibilities by relying solely on a GAI tool to perform tasks that call for the exercise of professional judgment." In other words, while GenAI can significantly increase efficiencies, lawyers should not rely on it at the expense of their own judgment.

Next, the Committee addressed the key ethical issues presented when lawyers incorporate GenAI tools into their workflows. First and foremost, technology competency was emphasized. According to the Committee, lawyers must stay updated on the evolving nature of GenAI technologies and have a reasonable understanding of the technology’s benefits, risks, and limitations.

Confidentiality obligations were also discussed, and the Committee highlighted the need to ensure that GenAI does not inadvertently expose client data and that systems should not be allowed to train on confidential data. Notably, the Committee required lawyers to obtain informed client consent before using these tools in ways that could impact client confidentiality, especially when using consumer-facing tools that train on inputted data.

The Committee also provided guidance on supervision requirements, advising that lawyers in managerial roles must ensure compliance with their firms’ established GenAI policies. The supervisory duty includes implementing policies, training personnel, and supervising the use of AI to prevent ethical violations.

The Committee highlighted the importance of reviewing all GenAI output to ensure its accuracy: “(D)uties to the tribunal likewise require lawyers, before submitting materials to a court, to review these outputs, including analysis and citations to authority, and to correct errors, including misstatements of law and fact, a failure to include controlling legal authority, and misleading arguments.”

Finally, the Committee offered insight into the ethics of legal fees charged when using GenAI to address client matters. The Committee explained that lawyers may charge fees encompassing the time spent reviewing AI-generated outputs but may not charge clients for time spent learning to use GenAI software. Importantly, it is impermissible for lawyers to invoice clients for time that would have been spent on work but for the efficiencies gained from using GenAI tools. In other words, clients can only be billed for the work completed, not for time saved due to GenAI.

Each new ethics opinion, like ABA Formal Opinion 512, offers much-needed guidance that enables lawyers to integrate AI tools into their firms thoughtfully and responsibly. By addressing emerging concerns and providing clear standards, these opinions reduce uncertainty and pave the way for forward-thinking lawyers to adopt GenAI confidently. While the ABA’s opinion is only advisory, it represents a positive trend of responsive guidance that arms the legal profession with the information needed to innovate ethically and adopt emerging technologies in today’s ever-changing AI era.



AI’s Role in Modern Law Practice Explored by Texas and Minnesota Bars

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

AI’s Role in Modern Law Practice Explored by Texas and Minnesota Bars

If you’re not yet convinced that artificial intelligence (AI) will change the practice of law, then you’re not paying attention. If nothing else, the sheer number of state bar ethics opinions and reports focused on AI released within the past two years should be a clear indication that AI’s effects on our profession will be profound.

Just this month, the Texas and Minnesota bar associations stepped into the fray, each issuing a report that studied the issues presented when legal professionals use AI.

First, there was the Texas Taskforce for Responsible AI in the Law’s “Interim Report to the State Bar of Texas Board of Directors,” which addressed the benefits and risks of AI, along with recommendations for the ethical adoption of these tools.

The Minnesota State Bar Association (MSBA) Assembly’s report, “Implications of Large Language Models (LLMs) on the Unauthorized Practice of Law (UPL) and Access to Justice,” assessed broader issues related to how AI could potentially impact the provision of legal services within our communities. 

Despite the divergence in focus, there was significant overlap in the topics the reports covered. For example, both emphasized the ethical use of AI and the importance of ensuring that AI increases rather than reduces access to justice.

However, their approaches to these issues differed. While the Texas Taskforce sought to develop guidelines for ethical AI use, the MSBA report suggested that there was no need to reinvent the wheel and that existing ethical guidance issued by other jurisdictions about AI tools like LLMs was likely sufficient to assist Minnesota legal professionals in navigating AI adoption.

There was also a joint focus on access to justice: both reports emphasized the value of ensuring that AI tools enhance it. The Texas Taskforce highlighted the need to support legal aid providers in obtaining access to AI, while the MSBA Assembly recommended the creation of an "Access to Justice Legal Sandbox" that "would provide a controlled environment for organizations to use LLMs in innovative ways, without the fear of UPL prosecution."

Overall, the MSBA Assembly's approach was more exploratory, while the Texas Taskforce's was more advisory. The MSBA Assembly's report included recommendations to take more detailed, actionable steps like creating an AI regulatory sandbox, launching pilot projects, and creating a Standing Committee to consider recommendations made in the report. In comparison, the Texas Taskforce identified broader goals such as raising awareness of cybersecurity issues surrounding AI, emphasizing the importance of AI education and CLEs, and proposing AI implementation best practices.

The issuance of these reports on the heels of other bar association guidance represents a significant step forward for the legal profession. While we've historically resisted change, we're now looking forward rather than backward. Bar associations are rising to the challenge during this period of rapid technological advancement, providing lawyers with much-needed, practical guidance designed to help them navigate the ever-changing AI landscape.

While Texas focuses on comprehensive guidelines and educational initiatives, Minnesota’s approach includes regulatory sandboxes and pilot projects. These differing strategies reflect a shared commitment to ensuring AI enhances access to justice and improves the lives of legal professionals. Together, these efforts indicate a profession that is, at long last, willing to adapt and innovate by leveraging emerging technologies to better serve society and uphold justice in an increasingly digital-first age.




Balancing Innovation and Ethics: Kentucky Bar Association’s Preliminary Stance on AI for Lawyers

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Balancing Innovation and Ethics: Kentucky Bar Association’s Preliminary Stance on AI for Lawyers

The rapid advancement of generative artificial intelligence (AI) technology has had many effects, one of which has been to spur bar association ethics committees into action. In less than two years, at least eight jurisdictions have issued AI guidance in one form or another, including California, Florida, New Jersey, Michigan, and New York, which I’ve covered in this column. 

Most recently, I discussed a joint opinion from the Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility and the Philadelphia Bar Association Professional Guidance Committee, Joint Formal Opinion 2024-200, and promised to subsequently tackle the Kentucky Bar Association's March opinion, Ethics Opinion KBA E-457, which I'll cover today.

This opinion was issued in March and was published to the KBA membership in the May/June edition of the Bench & Bar Magazine. After the 30-day public comment period expires, it will become final.

This opinion covers a wide range of issues, including technology competency, confidentiality, client billing, notification to courts and clients regarding AI usage, and the supervision of others in the firm who use AI. 

Notably, when providing the necessary context for the guidance provided, the Committee wisely acknowledged that hard and fast rules regarding AI adoption by law firms are inadvisable since the technology is advancing rapidly, and every law firm will use it in different, unique ways: “The Committee does not intend to specify what AI policy an attorney should follow because it is the responsibility of each attorney to best determine how AI will be used within their law firm and then to establish an AI policy that addresses the benefits and risks associated with AI products. The fact is that the speed of change in this area means that any specific recommendation will likely be obsolete from the moment of publication.”

Accordingly, the Committee’s advice was fairly elastic and designed to change with the times as AI technology improves. The Committee emphasized the importance of maintaining technology competency, which includes staying “abreast of the use of AI in the practice of law,” along with the corresponding duties to continually take steps to maintain client confidentiality and to carefully “review court rules and procedures as they relate to the use of AI, and to review all submissions to the Court that utilized Generative AI to confirm the accuracy of the content of those filings.”

As other bar associations have done, the Kentucky Bar Ethics Committee also highlighted the issues surrounding client communication and billing when using AI to streamline legal work. 

Departing from the hard-and-fast requirement that some bars have put in place regarding notifying clients whenever AI is used in their matter, the Committee took a more moderate approach, requiring disclosure only under certain circumstances. The Committee explained that there is no "ethical duty to disclose the rote use of AI generated research for a client's matter unless the work is being outsourced to a third party; the client is being charged for the cost of AI; and/or the disclosure of AI generated research is required by Court Rules."

Next, the Committee determined that when invoicing a client for work performed more efficiently using AI, lawyers should "consider reducing the amount of attorney's fees being charged the client when appropriate under the circumstances." Similarly, lawyers may pass on expenses related to AI software if there is "an acknowledgment in writing whereby the client agrees in advance to reimburse the attorney for the attorney's expense in using AI." However, the Committee cautioned that the "costs of AI training and keeping abreast of AI developments should not be charged to clients."

Finally, the Committee confirmed that lawyers who are partners or managers have a duty to ensure the ethical use of AI by other lawyers and employees, which involves appropriate training and supervision.

This opinion provides a thorough analysis of the issues and sound advice regarding AI usage in law firms. I’ve only hit the high points, so make sure to read the entire opinion for the Committee’s more nuanced perspective, especially if you are a Kentucky attorney. AI is here to stay and will inevitably impact your practice, likely much sooner than you might expect, given the rapid change we’re now experiencing. Invest time in learning about this technology now, so you can adapt to the times and incorporate it into your law firm, ultimately providing your clients with more efficient and effective representation.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


More AI Ethics Guidance Arrives With Pennsylvania Weighing In

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

More AI Ethics Guidance Arrives With Pennsylvania Weighing In

The rate of technological change this year has been off the charts. Lately, there’s daily news of generative artificial intelligence (AI) announcements: new products, feature releases, or acquisitions. Advancement has been occurring at such a rapid clip that it’s more challenging than ever to keep up with the pace of change — blink, and you’ll miss it!

Given how quickly AI has infiltrated our lives and profession, it’s been all the more impressive to watch bar association professional disciplinary committees step up to the plate and issue timely, much-needed guidance. Even though generative AI has been around for less than two years, California, Florida, New Jersey, Michigan, and New York had already issued GenAI guidance for lawyers as of April 2024.

Just a few months later, two other states, Pennsylvania and Kentucky, have weighed in, providing lawyers in their jurisdictions with roadmaps for ethical AI usage. Today, I’ll discuss the Pennsylvania guidance and will cover Kentucky’s in my next article.

On May 22, the Pennsylvania Bar Association Committee on Legal Ethics and Responsibility and the Philadelphia Bar Association Professional Guidance Committee issued Joint Formal Opinion 2024-200. In the introduction to the opinion, the joint Committee explained why it is critical for lawyers to learn about AI: “This technology has begun to revolutionize the way legal work is done, allowing lawyers to focus on more complex tasks and provide better service to their clients…Now that it is here, attorneys need to know what it is and how (and if) to use it.” A key way to meet that requirement is to take advantage of “continuing education and training to stay informed about ethical issues and best practices for using AI in legal practice.”

The joint Committee emphasized the importance of understanding both the risks and benefits of incorporating AI into your firm’s workflows. It also stated that if used appropriately and “with appropriate safeguards, lawyers can utilize artificial intelligence” in a compliant manner. 

The opinion included many recommendations and requirements for lawyers planning to use AI in their practices. First and foremost, the joint Committee emphasized basic competence and the need to “ensure that AI-generated content is truthful, accurate, and based on sound legal reasoning.” This obligation requires lawyers to confirm “the accuracy and relevance of the citations they use in legal documents or arguments.”

Another area of focus was on protecting client confidentiality. The joint Committee opined that lawyers must take steps to vet technology providers with the end goal being to “safeguard information relating to the representation of a client and ensure that AI systems handling confidential data adhere to strict confidentiality measures.”

Notably, the joint Committee highlighted the importance of ensuring that AI tools and their output are unbiased and accurate. This means that when researching a product and provider, steps must be taken to “ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.”

Transparency with clients was also discussed. Lawyers were cautioned to ensure clear communication “with clients about their use of AI technologies in their practices…(including) how such tools are employed and their potential impact on case outcomes.” Lawyers were also advised to clearly communicate with clients about AI-related expenses, which should be “reasonable and appropriately disclosed to clients.”

This guidance — emphasizing competence, confidentiality, and transparency — is a valuable resource for lawyers seeking to integrate AI into their practices. This timely advice helps ensure ethical AI usage in law firms, especially for Pennsylvania practitioners. For even more helpful ethics analysis, stay tuned for my next article, where we’ll examine Kentucky's recent AI guidance.

The GenAI Courtroom Conundrum

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

The GenAI Courtroom Conundrum

In the wake of GPT-4's release a year ago, there has been a notable uptick in the use of generative artificial intelligence (AI) tools by lawyers when drafting court filings. However, with this embrace of cutting-edge technology, there has been an increase in well-publicized incidents involving fabricated case citations.

Here is but a sampling of incidents from earlier this year:

  • A Massachusetts lawyer faced sanctions for submitting multiple memoranda riddled with false case citations (2/12/24).
  • A British Columbia attorney was reprimanded and ordered to pay opposing counsel's costs after relying on AI-generated "hallucinations" (2/20/24).
  • A Florida lawyer was suspended by the U.S. District Court for the Middle District of Florida for filing submissions based on fictitious precedents (3/8/24).
  • A pro se litigant's case was dismissed after the court called them out for submitting false citations for the second time (3/21/24).
  • The Ninth Circuit summarily dismissed a case without addressing the merits due to the attorney's reliance on fabricated cases (3/22/24).

Judges have been understandably unhappy with this trend, and courts across the nation have issued a patchwork of orders, guidelines, and rules regulating the use of generative AI in their courtrooms. According to data collected by RAILS (Responsible AI in Legal Services) in March, there were 58 different directives on record at that time.

This haphazard manner of addressing AI usage in courts is problematic. First, it fails to provide much-needed consistency and clarity. Second, it evinces a lack of understanding about the extent to which AI has been embedded within many technology products used by legal professionals for years now — in ways that are not always entirely transparent to the end user.

Fortunately, as this technology has become more commonplace and better understood, some of our judicial counterparts have begun to revise their perspectives, offering new approaches to AI-supported litigation filings. They have wisely decided that rather than over-regulating the technology, our profession would be better served by reinforcing existing rules that require due diligence and careful review of all court submissions, regardless of the tools employed.

For example, earlier this month, the Fifth Circuit U.S. Court of Appeals in New Orleans reversed course and chose not to adopt a proposed rule that would have required lawyers to certify that if an AI tool was used to assist in drafting a filing, all citations and legal analysis had been reviewed for accuracy. Lawyers who violated this rule could have faced sanctions and the risk that their filings would be stricken from the record. In lieu of adopting the rule, the Court advised lawyers to ensure “that their filings with the court, including briefs…(are) carefully checked for truthfulness and accuracy as the rules already require.”

In another case, a judge on the Eleventh Circuit U.S. Court of Appeals used ChatGPT and other generative AI tools to assist in writing his concurrence in Snell v. United Specialty Insurance Company, No. 22-12581. In his concurrence, he explained that he used the tools to aid in his understanding of what the term “landscaping” meant within the context of the case. The court was tasked with determining whether the installation of an in-ground trampoline constituted “landscaping” as defined by the insurance policy applicable to a negligence claim. Ultimately, he found that the AI chatbots did, indeed, provide the necessary insight, and he referenced this fact in the opinion.

In other words, the times, they are a-changin’. The rise of generative AI in legal practice has brought with it significant challenges, but reassuringly, the legal community is adapting. The focus is beginning to shift from restrictive regulations toward reinforcing existing ethical standards, including technology competence and due diligence. Recent developments suggest a balanced approach is emerging, one that acknowledges AI's potential while emphasizing lawyers' responsibility for accuracy. This path strikes the right balance between technological progress and professional integrity, and my hope is that more of our esteemed jurists choose it.

New York On the Ethics of Expensing Credit Card Processing Fees to Clients

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

New York On the Ethics of Expensing Credit Card Processing Fees to Clients

One of the key business challenges lawyers face is getting paid. When cash or check were the only choices, there was little payment flexibility available for law firms or their clients. Today, things have changed. Most billing and law practice management software programs have built-in features that streamline the billing process and allow law firms to offer payment convenience in ways never before possible. From payment plans to credit cards and even “Pay Later” legal fee loan options, lawyers and their clients have more options than ever.

Regardless of the payment method, lawyers must comply with the ethics requirements surrounding legal fees. As new payment methods become available, ethics committees often weigh in to ensure that lawyers have sufficient guidance when accepting alternative payment methods. 

One area that has received considerable attention from regulators over the years is credit card payments, which are now commonly accepted in most law firms. Despite their widespread use, novel ethics issues surrounding credit card payments occasionally arise and require guidance, such as the recent issue addressed by the New York State Bar Association Committee on Professional Ethics in Ethics Opinion 1258A.

At issue was whether a lawyer may pass on merchant processing fees to clients as an expense. At the outset, the Committee acknowledged that accepting credit cards as payment has long been permissible in New York provided that 1) the legal fee is reasonable, 2) client confidentiality is protected, 3) the credit card company’s actions do not impact client representation, 4) the client is advised before the charge is incurred and has the chance to dispute any billing errors, and 5) any disputes regarding the legal fee are handled according to the fee dispute resolution program outlined in 22 N.Y.C.R.R. Part 137.

Next, the Committee turned to the issue of expensing credit card fees to clients, explaining that excessive fees or expenses are prohibited by Rule 1.5(a) of the New York Rules of Professional Conduct (Rules). 

According to the Committee, this prohibition applies to a merchant processing fee because it is considered an “expense” under the Rules. As long as lawyers avoid charging excessive fees as defined in Rule 1.5(a), they may pass on to clients, as expenses, the merchant processing fees incurred when legal fees are paid by credit card.

Next, the Committee turned to Ethics Opinion 1050 from 2015, which addressed credit card payments made in the context of advance retainers. In that opinion, the Committee permitted the inquiring lawyer to, “as an administrative convenience, charge a client a nominal amount over the actual processing fees imposed on the lawyer by a credit card company in connection with the client’s payment by credit card of the lawyer’s advance payment retainer.”  

Doing so was conditioned upon 1) notifying the client and obtaining consent, and 2) ensuring that the additional fee was nominal and that the total amount of the advance payment retainer, the processing fees, and the convenience fee was likewise reasonable under the circumstances.

The Committee then turned to the case at hand and applied the same principles, concluding that when legal fees beyond the initial retainer are paid by credit card, a “lawyer may pass on a merchant processing fee to clients who pay for legal services by credit card provided that both the amount of the legal fee and the amount of the processing fee are reasonable, and provided that the lawyer has explained to the client and obtained client consent to the additional charge in advance.”

In 2024, lawyers have unprecedented flexibility in payment methods. However, a thorough understanding of your ethical obligations is essential, especially when your firm broadens client payment options. This opinion is an important reminder to carefully navigate ethics rules when accepting credit card payments from clients, especially as technology continues to evolve and impact how law firms do business. 

ABA Weighs in on Listserv Ethics

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

ABA Weighs in on Listserv Ethics

At first glance, you might assume that this article was published in the early 1990s and was reprinted by mistake. If so, you’d be wrong. The truth is, the American Bar Association (ABA), in its infinite wisdom, decided that May 2024—in the midst of the generative AI technology revolution—was the ideal time to address the ethical issues presented when lawyers interact on listservs, an email technology that has existed since 1986.

So hold on to your hats, early technology adopters, while we break this opinion down so that you have the ethics guidance needed to appropriately interact when using technology that has been around longer than the World Wide Web.

In Formal Opinion 511, the ABA considered whether a lawyer who seeks advice on a listserv regarding a client matter is “impliedly authorized to disclose information relating to the representation of a client or information that could lead to the discovery of such information.”

At the outset, the ABA Standing Committee on Ethics and Professional Responsibility explained that the duty of confidentiality prohibits lawyers from disclosing any information related to a client’s representation, no matter the source. Protected information is not limited to “communications protected by attorney-client privilege” and includes clients’ identities and even publicly available information like transcripts of proceedings.

Next, the Committee acknowledged that generally speaking, lawyers are permitted to consult with an attorney outside of their firm regarding a matter and may reveal information related to the representation in the absence of client consent, but only if the “information is anonymized or presented as a hypothetical and the information is revealed under circumstances in which…the information will not be further disclosed or otherwise used against the consulting lawyer’s client.” In addition, the information shared may not be privileged and must be non-prejudicial.

However, according to the Committee, this implied authority to disclose anonymized or hypothetical case-related information to other attorneys is limited to professional consultations with other lawyers. This is because “participation in most lawyer listserv discussion groups is significantly different from seeking out an individual lawyer or personally selected group of lawyers practicing in other firms for a consultation about a matter.”

The Committee noted that listservs can consist of unknown participants, and posts can be forwarded or otherwise shared and viewed by non-participants, including other lawyers representing a party in the same matter. As a result, “posting to a listserv creates greater risks than the lawyer-to-lawyer consultation.”

Given the risks, in the absence of client consent, lawyers are ethically prohibited from posting anything to a listserv that could reasonably be linked to an identifiable client, whether the intent is to obtain assistance in a case or otherwise engage on the listserv. 

Listserv use is not forbidden, however, and lawyers can interact in other ways: asking more general questions, sharing news updates, requesting access to a case, or seeking a form or document template.

Finally, the Committee expanded the opinion’s rationale to other types of interactions. The Committee opined that the “principles set forth in this opinion…apply equally when lawyers communicate about their law practices with individuals outside their law firms by other media and in other settings, including when lawyers discuss their work at in-person gatherings.”

That single line, easily missed at the beginning of the opinion, ensures that the Committee’s conclusions stand the test of time. 

This opinion on listserv ethics is a necessary reminder of the importance of confidentiality in all lawyer interactions, even when using long-established technologies like listservs. While the ABA’s timing could have been better, this advisory opinion is worth a thorough read. Take a look and then keep the Committee’s advisements in mind as you interact with other lawyers online and off. 
