New York Surrogate’s Court on Admissibility of AI Evidence

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

New York Surrogate’s Court on Admissibility of AI Evidence

The last few decades have seen rapid technological advances. For busy lawyers, keeping up with the pace of change has been a challenging endeavor. For many, the inclination has been to ignore the latest advancements in favor of maintaining the status quo.

Unfortunately, that approach has proven ineffective. 21st-century technologies have infiltrated all aspects of our lives, from how we communicate, make purchases, and obtain information to how we conduct business. Turning a blind eye is no longer an option. Instead, it is necessary to prioritize learning about emerging technologies, including their potential implications for your law practice, your clients’ cases, and your law license. 

This enlightened approach is essential as we enter the artificial intelligence (AI) era. Like the technologies that preceded it, AI will inevitably impact many aspects of your law practice, even if you choose not to incorporate it into your firm’s daily workflows.

For example, just as social media evidence has altered the course of trials, so too has artificial intelligence. A case in point is Saratoga County Surrogate’s Court Judge Schopf’s October 10th court order in Matter of Weber (2024 NY Slip Op 24258). One issue under consideration in this case was the use of generative AI-produced evidence at a hearing.

In Matter of Weber, the Petitioner filed a Petition for Judicial Settlement of the Interim Account of the Trustee. The Objectant responded by filing objections to the Trust Account alleging, in relevant part, that the Petitioner had breached her fiduciary duty as Trustee. A hearing was held to address the Objectant’s allegations. 

This opinion followed. In it, the Court considered whether the Objectant had overcome the prima facie accuracy of the Interim Account and proved his objections. One issue addressed was whether and under what circumstances AI-generated output is admissible into evidence.

The hearing testimony revealed that the Objectant's expert witness, Charles Ranson, used Microsoft’s generative AI tool, Copilot, to cross-check his damages calculations. The evidence showed that Ranson could not provide the input or prompt used, nor could he explain what sources the chatbot relied upon or the process it used to create the output provided.

When determining the admissibility of Copilot’s responses, Judge Schopf explained that the “mere fact that artificial intelligence has played a role, which continues to expand in our everyday lives, does not make the results generated by artificial intelligence admissible in Court.”

The Court concluded that the reliability of AI-generated responses must be established before they are admitted into evidence. The Court explained that “due to the nature of the rapid evolution of artificial intelligence and its inherent reliability issues that prior to evidence being introduced which has been generated by an artificial intelligence product or system, counsel has an affirmative duty to disclose the use of artificial intelligence and the evidence sought to be admitted should properly be subject to a Frye hearing prior to its admission, the scope of which should be determined by the Court, either in a pre-trial hearing or at the time the evidence is offered.”

According to Judge Schopf, the Objectant failed to meet that burden: “In the instant case, the record is devoid of any evidence as to the reliability of Microsoft Copilot in general, let alone as it relates to how it was applied here. Without more, the Court cannot blindly accept as accurate, calculations which are performed by artificial intelligence.”

This decision evinces the growing need to carefully scrutinize AI-generated evidence in legal proceedings. Courts are unlikely to admit this type of evidence at this early stage unless its reliability is established beforehand. As this technology becomes commonplace, these standards may evolve and become more elastic. Only time will tell. 

In the interim, take steps to proactively learn about AI tools so that you can advocate for or challenge their use in court effectively. By staying informed, you will be well-positioned to meet both the opportunities and challenges posed by AI-driven evidence. There’s no better time than now to get up to speed. Start learning about generative AI today to ensure that you are prepared for the future of law.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


Florida’s Professional Conduct Rules Will Include AI—But Was It Needed?

 

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Florida’s Professional Conduct Rules Will Include AI—But Was It Needed?

When faced with the impact of a potentially disruptive technology, our profession follows a very predictable path: ignorance, indifference, overreaction, readjustment, begrudging acceptance, and finally, appreciation. With the sudden emergence and advancement of generative artificial intelligence (AI), the cycle has started, and all signs point to deep entrenchment in the overreaction phase.

OpenAI released ChatGPT, built on GPT-3.5, nearly two years ago, in November 2022. Since then, AI's effects have been inescapable, rapid, and significant, with cascading changes occurring across the profession. Headlines about lawyers relying on generative AI tools and submitting briefs to courts that include fake case citations have only amplified already heightened and overblown concerns about AI.

These reactions are unsurprising given AI's wide-ranging potential to revamp core legal functions, from legal research and document drafting to litigation analytics and contract review. In the face of inevitable change, our profession is now focused on whether AI is a tool that will enhance lawyers' practices or a force that could undermine or even replace the practice of law as we know it.

In response to these concerns, several jurisdictions across the United States have formed AI committees, issued guidance, or authored opinions to help lawyers navigate a strange new world where AI runs rampant. More than eight states, including Florida, California, and Michigan, have taken formal steps to address AI’s role in legal practice. 

While these efforts are welcome to the extent that they help to encourage adoption, they are arguably unnecessary. Current rules and guidance on technology usage are more than sufficient.

The most recent efforts arose in Florida, where the Bar took the extreme step of modifying the Rules Regulating The Florida Bar to include references to generative AI. On August 29, the Florida Supreme Court adopted the amendments proposed by the Bar. These changes will go into effect on October 28.

One update was to the comment regarding the competency requirement. It now advises that lawyers must stay on top of technology changes, “including generative artificial intelligence.” 

Additionally, the duty of confidentiality now includes the obligation to “be aware that generative artificial intelligence may create risks to the lawyer’s duty of confidentiality.” 

Similarly, the duty of supervision now requires that supervising attorneys “consider safeguards for the firm’s use of technologies such as generative artificial intelligence, and ensure that inexperienced lawyers are properly supervised.”

Finally, Rule 4-5.3, which addresses lawyers’ responsibilities regarding nonlawyer assistants, now provides that a “lawyer should also consider safeguards when assistants use technologies such as generative artificial intelligence.”

These amendments were unnecessary and unwise and will not withstand the test of time. AI is simply a new tool. Other technologies preceded it, and new ones will follow. It is not the be-all and end-all of technology or our profession, and trying to ban or reduce its use by lawyers is a pointless, ineffectual endeavor that fails to serve the needs of our profession in the AI era.

The reactions by state bars to AI are entirely predictable. No matter the technology, our profession has tried to regulate it, from email and the Internet to social media and cloud computing. 

Demonizing new technology and banning its use have been par for the course. Eventually, however, a resigned acceptance set in as each tool became commonplace. AI will be no different. 

Soon, we'll move on from overreaction to the later phases, ultimately landing on appreciation. This process will happen much faster than it has in the past due to AI's rapid rate of advancement. So buckle up, shore up your AI knowledge, and hold on, folks! The times are a-changin', and quickly. Catch up while you still can!

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

 


ILTA's 2024 Tech Survey: AI, Cloud, and the Tools Driving Law Firm Efficiency

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

ILTA's 2024 Tech Survey: AI, Cloud, and the Tools Driving Law Firm Efficiency

The 2024 International Legal Technology Association (ILTA) Technology Survey was recently released, and it provides a wealth of information on technology adoption trends in law firms. Not surprisingly, this year it includes data on how lawyers are implementing generative artificial intelligence (AI) into their firms. However, many other types of technology issues are addressed as well.

The report reveals how AI is currently being used in firms and provides data on plans for future investment in AI and other technologies. Areas addressed include cloud-based tools, software to streamline law firm operations, and technologies adopted to support remote work.

First, let’s take a look at the AI data. The survey results show that AI adoption has increased over the past year, with 37% of firms now using it compared to only 15% in 2023. The data also shows that larger law firms are leading the way, with 74% of firms with 700 or more lawyers using AI tools in 2024. In comparison, only 20% of firms with fewer than 50 lawyers are using AI, followed by firms with 50-149 lawyers (27%), firms with 150-349 lawyers (36%), and firms with 350-699 lawyers (65%). 

Overall, firms are using AI to address various business needs. The top functions supported by AI include marketing and business development (49%), litigation support (42%), billing and accounting (31%), and professional development (27%). 

Lawyers are also relying on AI to increase efficiency in their daily workflows. The results show that research is expected to be the top use of generative AI in the next year, according to 73% of respondents. Other popular use cases include summarizing complex documents (70%, up from 48% in 2023), creating initial drafts of documents (69%, up from 61% in 2023), writing presentations (61%, up from 55% in 2023), and drafting alerts or email notifications (50%, up from 43% in 2023). 

Compared to last year, fewer legal professionals plan to use AI for creative tasks such as brainstorming ideas (43%, down from 46%), writing/troubleshooting code (33%, down from 36%), and generating strategic ideas (27%, down from 28%). Despite those declines, the bulk of the data shows that there continues to be significant interest in the potential of generative AI and its promise of improving productivity firmwide.

The survey also explored remote collaboration technology adoption, seeking insight into the tools used most often in firms. The data showed that most firms now use video conferencing tools like Zoom and WebEx, with 94% of respondents reporting the availability of these platforms in their firms. Email (91%) and chat tools like Teams and Jabber (84%) are also widely used. Document-sharing functionality is gaining traction as well, with 44% relying on these tools, reflecting the continued shift to digital workspaces.

In keeping with the shift to online collaboration, firms are also increasingly moving toward a paperless approach. Only 13% are not considering a shift to digital documents, while 49% report that their firms are “paper-lite,” 20% have paperless projects underway, 8% are working on a paperless strategy, and 10% are fully paperless.

Finally, one of the most notable data points reflects this digital-first trend: the rapid rise of cloud-based tools post-pandemic. When it comes to cloud use, 43% of firms say they are “mostly in the cloud,” while another 42% opt for a “cloud with every upgrade” approach. Only 2% of respondents indicated that they are “not yet comfortable with the cloud,” down from 7% in 2021.

Overall, these findings from the 2024 ILTA Technology Survey highlight the legal industry's ongoing shift toward digital-first practices, with AI playing a key role in that transition. Firms are increasingly relying on AI, cloud-based tools, and remote collaboration technologies to streamline operations and support flexible work environments.

How does your firm compare? In today’s competitive legal marketplace, what steps are you taking to implement AI, cloud solutions, and digital collaboration tools into your firm’s IT stack to streamline efficiency, improve workflows, and provide better client service?

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


PayPal Fee Trust Account Mishap Results in NJ Disciplinary Reprimand 

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

PayPal Fee Trust Account Mishap Results in NJ Disciplinary Reprimand

At the turn of the century, it was uncommon for lawyers to accept credit cards. Today, however, most law firms take advantage of the flexibility and convenience of online payments. According to the 2024 MyCase and LawPay Legal Industry Report, 78% of law firms now accept online payments via credit or debit card. Nearly half of those firms (44%) report that they collect $3,000 or more per month as a result. 

With the rise of online payments in the legal field and beyond, there has been a corresponding increase in the number of tools that enable law firms to accept credit card payments. Sifting through the many options available requires an analysis of features, processing fees, and ethical compliance, among other things. Identifying the right tool for your firm’s needs can be a challenging task, but many experts recommend choosing a payment platform designed for the needs of lawyers in order to avoid violating the many ethical requirements surrounding lawyer trust accounts.

Case in point: a recent matter in which a New Jersey lawyer was reprimanded due to an $18.90 overdraft of the firm’s trust account. The reprimand was recommended by the Disciplinary Review Board and imposed by the New Jersey Supreme Court on September 4th in In the Matter of Michael A. Gorokhovich (D-70, September Term 2023, 089080).

The reprimand stemmed from an overdraft caused by a $19.99 PayPal Business charge to his trust account. The record showed that two earlier debits were similar in nature, but those had not overdrawn the trust account and went unnoticed. The lawyer claimed he had not authorized the debits and closed his PayPal account to prevent future issues. 

The attorney was reprimanded for “failing to maintain receipts and disbursements journals; conduct monthly, three-way ATA reconciliations; maintain individual client trust ledgers; maintain a running cash balance in his ATA checkbook; and retain ABA and ATA records for seven years. Moreover, because of his inept recordkeeping practices, he failed to notice, let alone put a stop to, allegedly unauthorized electronic charges to his ATA until after the third such charge caused an overdraft in that account.” 

As a result of the reprimand, he was required to “(1) complete a recordkeeping course preapproved by the Office of Attorney Ethics within sixty days of this order; (2) submit proof to the Office of Attorney Ethics, within sixty days of this order, that the recordkeeping deficiencies identified during the audit have been corrected; and (3) provide to the Office of Attorney Ethics monthly reconciliations of respondent’s attorney accounts, on a quarterly basis, for two years…”

Using payment processing software specifically designed for lawyers could have prevented this type of mishap. Legal-specific payment platforms are designed with the complexities of law firm billing and trust accounting in mind and typically include built-in safeguards tailored to meet the ethical requirements surrounding attorney trust accounts. 

These tools automatically separate earned fees from unearned funds, protecting trust accounts from unauthorized and unethical debits. Similarly, these platforms prevent credit card processing and other fees from being withdrawn directly from attorney trust accounts, avoiding unauthorized debits and potential overdrafts similar to the one that triggered the reprimand in this case.

Another benefit of legal payment platforms is that they can include features that simplify compliance with trust accounting rules. Detailed transaction reports enable easy tracking of client funds and facilitate the three-way reconciliations required by most state bar associations. 

Payment processing tools built for legal professionals are designed to ensure the proper maintenance of accounting records. If your firm isn’t yet using a legal payment processing platform, the New Jersey disciplinary case clearly shows why now is the time to make that change. Legal-specific platforms help protect client funds, ensure trust account compliance, and prevent the ethical pitfalls that can arise with general payment processors like PayPal.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

 

 


Technology Competence in the Age of Artificial Intelligence

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Technology Competence in the Age of Artificial Intelligence

With technology evolving so quickly, powered by the rapid development of generative artificial intelligence (AI) tools, keeping pace with change becomes all the more critical. For lawyers, the ethical requirement of maintaining technology competence plays a large part in that endeavor. 

The duty of technology competence is relatively broad, and the obligations required by this ethical rule can sometimes be unclear, especially when applied to emerging technologies like AI. Rule 1.1 states that a “lawyer should provide competent representation to a client.” The comments to this rule clarify that to “maintain the requisite knowledge and skill, a lawyer should . . . keep abreast of the benefits and risks associated with technology the lawyer uses to provide services to clients or to store or transmit confidential information.”

With the proliferation of AI, this duty has become all the more relevant, especially as trusted legal software companies begin to incorporate this technology into the platforms that legal professionals use daily in their firms. Lawyers seeking to take advantage of the significant workflow efficiencies that AI offers must ensure that they’re doing so ethically. 

That’s easier said than done. In today’s fast-paced environment, what is required to meet that duty? Does it simply require that you understand the concept of AI? Do you have to understand how AI tools work? Is there a continuing obligation to track changes in AI as it advances? If you have no plans to use it, can you ignore it and avoid learning about it? 

Fortunately for New York lawyers, there are now two sets of ethics guidance available: the New York State Bar’s April 2024 Report and Recommendations from the Taskforce on Artificial Intelligence and, more recently, Formal Opinion 2024-5, issued by the New York City Bar Association.

The New York State Bar’s guidance on AI is overarching and general, particularly regarding technology competence. As the “AI and Generative AI Guidelines” provided in the Report explain, lawyers “have a duty to understand the benefits, risks and ethical implications associated with the Tools, including their use for communication, advertising, research, legal writing and investigation.”

While instructive, the advice is fairly general, and intentionally so. As the Committee explained, AI is no different than the technology that preceded it, and thus, “(m)any of the risks posed by AI are more sophisticated versions of problems that already exist and are already addressed by court rules, professional conduct rules and other law and regulations.” 

Lawyers seeking more concrete guidance on technology competence when adopting AI need look no further than the New York City Bar’s AI opinion, in which the Ethics Committee offers significantly more granular insight into technology competence obligations.

First, lawyers must understand that current generative AI tools may include outdated information “that is false, inaccurate, or biased.” The Committee requires that lawyers understand not only what AI is but also how it works. 

Before choosing a tool, the Committee recommends several courses of action. First, you must “understand to a reasonable degree how the technology works, its limitations, and the applicable [T]erms of [U]se and other policies governing the use and exploitation of client data by the product.” Additionally, you may want to learn about AI by “acquiring skills through a continuing legal education course.” Finally, consider consulting with IT professionals or cybersecurity experts. 

The Committee emphasized the importance of carefully reviewing all responses for accuracy, explaining that generative AI outputs “may be used as a starting point but must be carefully scrutinized. They should be critically analyzed for accuracy and bias.” The duty of competence requires that lawyers ensure the original input is correct and analyze the corresponding response “to ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand, including as part of advocacy for the client.”

The Committee further clarified that you cannot delegate your professional judgment to AI and that you “should take steps to avoid overreliance on Generative AI to such a degree that it hinders critical attorney analysis fostered by traditional research and writing.” This means that lawyers should supplement all AI output “with human-performed research and supplement any Generative AI-generated argument with critical, human-performed analysis and review of authorities.”

If you plan to dive into generative AI, both sets of guidance should provide a solid roadmap to help you navigate your technology competence duties. Understanding how AI tools function, along with their limitations, is essential when using this technology. By staying informed and applying critical judgment to the results, you can ethically leverage AI’s many benefits to provide your clients with the most effective, efficient representation possible.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].



Beyond Simple Tools: vLex's Vincent AI and the Future of Trusted Legal AI Platforms

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Beyond Simple Tools: vLex's Vincent AI and the Future of Trusted Legal AI Platforms

There has been a noticeable shift in the way that legal technology companies are approaching generative artificial intelligence (AI) product development. Last year, several general legal assistant chatbots were released, mimicking the functionality of ChatGPT. Next came the emergence of single-point solutions, often developed by start-ups, to address specific workflow challenges such as drafting litigation documents, analyzing contracts, and legal research. 

As we approach the final quarter of 2024, established legal technology providers are more deeply integrating generative AI into comprehensive platforms, streamlining the user interface of legal research, practice management, and document management tools. Rather than standalone tools, generative AI is becoming a core feature of legal platforms, enabling users to access all their data and software seamlessly in one trusted environment.

A notable example is vLex, which acquired Fastcase last year and announced major updates to its Vincent AI product this week. Ed Walters, vLex’s Chief Strategy Officer, described the update as the transformation from an AI-powered legal research and litigation drafting tool into a full-fledged legal AI platform.

This release expands workflows for transactional, litigation, and contract matters, enabling users to 1) analyze contracts, depositions, and complaints, 2) perform redline analysis and document comparisons, 3) upload documents to find related authorities, 4) generate research memoranda, 5) compare laws across jurisdictions, and 6) explore document collections to extract facts, create timelines, and more.

Similarly, legal research giants LexisNexis and Thomson Reuters both rolled out revamped versions of their generative AI assistants last month, reinforcing the trend toward AI-driven platforms. LexisNexis introduced Protégé, an AI assistant designed to meet each user’s specific needs and workflows that serves as the gateway to a suite of LexisNexis products. Thomson Reuters, meanwhile, unveiled CoCounsel 2.0, an enhanced version of the AI assistant it originally launched last year. Built on technology from its acquisition of Casetext’s CoCounsel, this upgraded legal assistant acts as the central interface for accessing many Thomson Reuters tools and resources, streamlining workflows across its products.

Despite the platform trend, single-point AI solutions remain valuable, especially for solo or small firms looking to streamline specific tasks like document analysis, drafting pleadings, or preparing discovery responses. These standalone tools continue to be developed and offer significant value for firms not already invested in a software ecosystem with integrated AI. If you’re in the market for an AI tool that accomplishes only one task, there’s most likely an AI tool available that fits the bill.

However, for many firms, AI integration into the software platforms they already use will likely be the most practical path forward. This approach helps to bridge the implementation gap and addresses common concerns about trust, which are often barriers to AI adoption. By partnering with trusted legal technology providers, firms can more comfortably adopt AI by leveraging the security and reliability of the platforms already in place.

With the deeper integration of AI into comprehensive legal platforms, the adoption process will become smoother, reducing the friction and tedium of day-to-day law firm processes. This shift will allow legal professionals to focus on more meaningful work, improving both the practice of law and client service. 

Whether a standalone product or built into legal software platforms, generative AI offers significant potential, some of which is already being realized. It’s more than just another tool—it may very well redefine how law firms operate, paving the way for a more efficient and effective future.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


Legal Ethics in the AI Era: The NYC Bar Weighs In

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Legal Ethics in the AI Era: The NYC Bar Weighs In

Since ChatGPT was released in November 2022, many jurisdictions have issued AI guidance. In this column, I’ve covered the advice rendered by many ethics committees, including those of California, Florida, New Jersey, Michigan, New York, Pennsylvania, Kentucky, the American Bar Association, and most recently, Virginia.

Now, the New York City Bar Association has entered the ring, issuing Formal Ethics Opinion 2024-5 on August 7th. The New York City Bar Association’s Committee on Professional Ethics mirrored the California Bar’s approach, providing general guidelines in a chart format rather than proscriptive requirements. The Committee explained that “when addressing developing areas, lawyers need guardrails and not hard-and-fast restrictions or new rules that could stymie developments.” Instead, the goal was to provide assistance to New York attorneys through “advice specifically based on New York Rules and practice…”

Regarding confidentiality, the Committee distinguished between “closed systems” consisting of a firm’s “own protected databases,” like those typically provided by legal technology companies, and systems like ChatGPT that share inputted information with third parties or use it for their own purposes. Client consent is required for the latter, and even with “closed systems,” confidentiality protections within the firm must be maintained. The Committee cautioned that the terms of use for a generative AI tool should be reviewed regularly to ensure that the technology vendor is not using inputted information to train or improve its product in the absence of informed client consent.

Turning to the duty of technology competence, the Committee opined that when choosing a product, lawyers “should understand to a reasonable degree how the technology works, its limitations, and the applicable [T]erms of [U]se and other policies governing the use and exploitation of client data by the product.” Also emphasized was the need to avoid delegating professional judgment to these tools and to consider generative AI outputs to be a starting point. Not only must lawyers ensure that the output is accurate, but they should also take steps to “ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand.”

The duty of supervision was likewise addressed, with the Committee confirming that firms should have policies and training in place for lawyers and other employees in the firm regarding the permissible use of this technology, including ethical and practical uses, along with potential pitfalls. The Committee also advised that any client intake chatbots used by lawyers on their websites or elsewhere on behalf of the firm should be adequately supervised to avoid “the risk that a prospective client relationship or a lawyer-client relationship could be created.”

Not surprisingly, the Committee required lawyers to be aware of and comply with any court orders regarding AI use. Another court-related issue addressed was AI-created deepfakes and their impact on the judicial process. According to the Committee’s guidance, lawyers must screen all client-submitted evidence to assess whether it was generated by AI, and if there is a suspicion “that a client may have provided the lawyer with Generative AI-generated evidence, a lawyer may have a duty to inquire.”

Finally, the Committee turned to billing issues, agreeing with other jurisdictions that lawyers may charge for time spent crafting inquiries and reviewing output. Additionally, the Committee explained that firms may not bill clients for time saved as a result of AI usage and that firms may want to explore alternative fee arrangements in order to stay competitive since AI may significantly impact legal pricing moving forward. Last but not least, any generative AI costs should be disclosed to clients, and any costs charged to clients “should be consistent with ethical guidance on disbursements and should comply with applicable law.” 

The summary above provides only an overview of the guidance; for a more nuanced perspective, you should read the opinion in its entirety. Whether you’re a New York lawyer or practice elsewhere, this guidance is worth reviewing and provides a helpful roadmap for adoption as we head into an AI-led future where technology competence is no longer an option. Instead, it is an essential requirement for the effective and responsible practice of law.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


Practical and Adaptable AI Guidance Arrives From the Virginia State Bar 

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Practical and Adaptable AI Guidance Arrives From the Virginia State Bar 

If you're concerned about the ethical issues surrounding artificial intelligence (AI) tools, the good news is that there's no shortage of guidance. A wealth of resources, guidelines, and recommendations are now available to help you navigate these concerns. 

Traditionally, bar associations have taken years to analyze the ethical implications of new and emerging technologies. However, generative AI has reversed this trend. Ethics guidance has emerged far more quickly, which is a very welcome change from the norm.

Since the general release of the first version of ChatGPT in November 2022, ethics committees have stepped up to the plate and offered much-needed AI guidance to lawyers at a remarkably rapid clip. Jurisdictions that have weighed in include California, Florida, New Jersey, Michigan, New York, Pennsylvania, Kentucky, and the American Bar Association. 

Recently, Virginia entered the AI ethics discussion with a notably concise approach. Unlike the often lengthy and detailed analyses from other jurisdictions, the Virginia State Bar issued a streamlined set of guidelines, available as an update on its website (accessible online at the bottom of the page: https://vsb.org/Site/Site/lawyers/ethics.aspx). This approach stands out not only for its brevity but also for its focus on providing practical, overarching advice. By avoiding the intricacies of specific AI tools or interfaces, the Virginia State Bar has ensured that its guidance remains flexible and relevant, even as the technology rapidly evolves.

Importantly, the Bar acknowledged that regardless of the type of technology at issue, lawyers’ ethical obligations remain the same: “(A) lawyer’s basic ethical responsibilities have not changed, and many ethics issues involving generative AI are fundamentally similar to issues lawyers face when working with other technology or other people (both lawyers and nonlawyers).”

Next, the Bar examined confidentiality obligations, opining that just as lawyers must review data-handling policies relating to other types of technology, so, too, must they vet the methods used by AI providers when handling confidential client information. The Bar explained that while legal-specific providers can often promise better data security, there is still an obligation to ensure a full understanding of their data management approach: “Legal-specific products or internally-developed products that are not used or accessed by anyone outside of the firm may provide protection for confidential information, but lawyers must make reasonable efforts to assess that security and evaluate whether and under what circumstances confidential information will be protected from disclosure to third parties.”

One area where the Bar’s approach conformed to that of most jurisdictions was client consent. While the ABA suggested that explicit client consent is required in many cases when AI is used, the Bar agreed with most other ethics committees, concluding that there “is no per se requirement to inform a client about the use of generative AI in their matter” unless there are extenuating circumstances, such as an agreement with the client or increased risks like those encountered when using consumer-facing products.

The Bar also considered supervisory requirements, emphasizing the importance of reviewing all output just as you would work from any other source. According to the Bar, as “with any legal research or drafting done by software or by a nonlawyer assistant, a lawyer has a duty to review the work done and verify that any citations are accurate (and real),” and that duty of supervision “extends to generative AI use by others in a law firm.”

Next, the Bar provided insight into the impact of AI usage on legal fees. The Bar agreed that lawyers cannot charge clients for the time saved as a result of using AI: “A lawyer may not charge an hourly fee in excess of the time actually spent on the case and may not bill for time saved by using generative AI. The lawyer may bill for actual time spent using generative AI in a client’s matter or may wish to consider alternative fee arrangements to account for the value generated by the use of generative AI.”

On the issue of passing the costs of AI software on to clients, the Bar concluded that doing so was not permissible unless the fee is both reasonable and “permitted by the fee agreement.”

Finally, the Bar addressed a handful of recently issued court rules that forbid the use of AI for document preparation, highlighting the importance of being aware of and complying with all court disclosure requirements regarding AI usage.

The Virginia State Bar’s flexible and practical AI ethics guidance offers a valuable framework for lawyers as they adjust to the ever-changing generative AI landscape. By focusing on overarching principles, this thoughtful approach ensures adaptability as technology evolves. For those seeking reliable guidance, Virginia’s model offers a useful roadmap for remaining ethically grounded amid unprecedented technological advancements.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


The ABA Weighs in on the Ethical Use Of AI

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

The ABA Weighs in on the Ethical Use Of AI

Generative artificial intelligence (GenAI) is advancing at exponential rates. Since the release of GPT-4 less than two years ago, there has been an explosion of GenAI tools designed for legal professionals. With the rapid proliferation of software incorporating this technology comes increased concerns about ethical and secure implementation. 

Ethics committees across the country have stepped up to the plate to offer guidance to assist lawyers seeking to adopt GenAI into their firms. Most recently, the American Bar Association weighed in, handing down Formal Opinion 512 at the end of July. 

In its opinion, the ABA Standing Committee on Ethics and Professional Responsibility acknowledged the significant productivity gains that GenAI can offer legal professionals, explaining that GenAI “tools offer lawyers the potential to increase the efficiency and quality of their legal services to clients…Lawyers must recognize inherent risks, however." 

Importantly, the Committee also cautioned that, when using these tools, “lawyers may not abdicate their responsibilities by relying solely on a GAI tool to perform tasks that call for the exercise of professional judgment.” In other words, while GenAI can significantly increase efficiencies, lawyers should not rely on it at the expense of their professional judgment.

Next, the Committee addressed the key ethical issues presented when lawyers incorporate GenAI tools into their workflows. First and foremost, technology competency was emphasized. According to the Committee, lawyers must stay updated on the evolving nature of GenAI technologies and have a reasonable understanding of the technology’s benefits, risks, and limitations.

Confidentiality obligations were also discussed, and the Committee highlighted the need to ensure that GenAI does not inadvertently expose client data and that systems should not be allowed to train on confidential data. Notably, the Committee required lawyers to obtain informed client consent before using these tools in ways that could impact client confidentiality, especially when using consumer-facing tools that train on inputted data.

The Committee also provided guidance on supervision requirements, advising that lawyers in managerial roles must ensure compliance with their firms’ established GenAI policies. The supervisory duty includes implementing policies, training personnel, and supervising the use of AI to prevent ethical violations.

The Committee highlighted the importance of reviewing all GenAI output to ensure its accuracy: “(D)uties to the tribunal likewise require lawyers, before submitting materials to a court, to review these outputs, including analysis and citations to authority, and to correct errors, including misstatements of law and fact, a failure to include controlling legal authority, and misleading arguments.”

Finally, the Committee offered insight into the ethics of legal fees charged when using GenAI to address client matters. The Committee explained that lawyers may charge fees encompassing the time spent reviewing AI-generated outputs but may not charge clients for time spent learning to use GenAI software. Importantly, it is impermissible for lawyers to invoice clients for time that would have been spent on work but for the efficiencies gained from using GenAI tools. In other words, clients can only be billed for the work completed, not for time saved due to GenAI.

Each new ethics opinion, like ABA Formal Opinion 512, offers much-needed guidance that enables lawyers to integrate AI tools into their firms thoughtfully and responsibly. By addressing emerging concerns and providing clear standards, these opinions reduce uncertainty and pave the way for forward-thinking lawyers to adopt GenAI confidently. While the ABA’s opinion is only advisory, it represents a positive trend of responsive guidance that arms the legal profession with the information needed to innovate ethically and adopt emerging technologies in today’s ever-changing AI era.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Principal Legal Insights Strategist at MyCase, LawPay, CASEpeer, and Docketwise, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


AI’s Role in Modern Law Practice Explored by Texas and Minnesota Bars

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

AI’s Role in Modern Law Practice Explored by Texas and Minnesota Bars

If you’re not yet convinced that artificial intelligence (AI) will change the practice of law, then you’re not paying attention. If nothing else, the sheer number of state bar ethics opinions and reports focused on AI released within the past two years should be a clear indication that AI’s effects on our profession will be profound.

Just this month, the Texas and Minnesota bar associations stepped into the fray, each issuing reports that studied the issues presented when legal professionals use AI. 

First, there was the Texas Taskforce for Responsible AI in the Law’s “Interim Report to the State Bar of Texas Board of Directors,” which addressed the benefits and risks of AI, along with recommendations for the ethical adoption of these tools.

The Minnesota State Bar Association (MSBA) Assembly’s report, “Implications of Large Language Models (LLMs) on the Unauthorized Practice of Law (UPL) and Access to Justice,” assessed broader issues related to how AI could potentially impact the provision of legal services within our communities. 

Despite the divergence in focus, the reports overlapped significantly. For example, both emphasized the ethical use of AI and the importance of ensuring that AI increases rather than reduces access to justice.

However, their approaches to these issues differed. While the Texas Taskforce sought to develop guidelines for ethical AI use, the MSBA report suggested that there was no need to reinvent the wheel and that existing ethical guidance issued by other jurisdictions about AI tools like LLMs was likely sufficient to assist Minnesota legal professionals in navigating AI adoption.

On access to justice specifically, the Texas Taskforce highlighted the need to support legal aid providers in obtaining access to AI, while the MSBA’s Assembly recommended the creation of an “Access to Justice Legal Sandbox” that “would provide a controlled environment for organizations to use LLMs in innovative ways, without the fear of UPL prosecution.”

Overall, the MSBA Assembly’s approach was more exploratory, while the Texas Taskforce’s was more advisory. The MSBA Assembly’s report recommended detailed, actionable steps like creating an AI regulatory sandbox, launching pilot projects, and creating a Standing Committee to consider the recommendations made in the report. In comparison, the Texas Taskforce identified broader goals such as raising awareness of cybersecurity issues surrounding AI, emphasizing the importance of AI education and CLEs, and proposing AI implementation best practices.

The issuance of these reports on the heels of other bar association guidance represents a significant step forward for the legal profession. While we’ve historically resisted change, we’re now looking forward rather than backward. Bar associations are rising to the challenge during this period of rapid technological advancement, providing lawyers with much-needed, practical guidance designed to help them navigate the ever-changing AI landscape.

While Texas focuses on comprehensive guidelines and educational initiatives, Minnesota’s approach includes regulatory sandboxes and pilot projects. These differing strategies reflect a shared commitment to ensuring AI enhances access to justice and improves the lives of legal professionals. Together, these efforts indicate a profession that is, at long last, willing to adapt and innovate by leveraging emerging technologies to better serve society and uphold justice in an increasingly digital-first age.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

 


Pre-Trial AI Tools For Lawyers

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Pre-Trial AI Tools For Lawyers

I often receive emails from lawyers who reach out after having read one of my articles about generative artificial intelligence (AI) tools. They frequently seek advice about implementing AI software in their firm. Some focus on ethics and accuracy concerns, while others ask for my input on which tools to use to address a workflow issue in their firm. These communications can sometimes inspire me to write articles about a specific type of AI software since it’s a safe bet that other lawyers may be struggling with the same issue.

Recently, many emails have focused on pre-trial AI tools for lawyers. This makes sense, since AI tools can streamline many of the repetitive, tedious tasks involved in the discovery and motion stages of a case. 

Of course, both legal language and litigation processes are complex, which means that consumer-focused generative AI tools such as ChatGPT or Claude are often inadequate, producing less-than-ideal output. Fortunately, legal technology companies that thoroughly understand legal workflows and lawyers’ unique needs are much better positioned to develop tools that streamline pre-trial workflows and generate reliable, useful content.

Because AI can address many of the pain points encountered during the early stages of litigation, it’s no surprise that a range of AI tools targeting pre-trial workflow challenges has been released over the past year. 

Now that those products are available, let’s review some of the top categories. Note that I have not tested most of these tools and am only providing information regarding available software. You must carefully vet providers and take advantage of any free trials and demos offered before settling on a tool. To assist with the vetting process, you’ll find a list of suggested questions to ask legal cloud and AI providers here. 

The first category of AI tools we’ll consider is pre-trial discovery management. This software automates the tedious and redundant process of preparing routine pleadings, discovery requests, and discovery responses. Upon uploading complaints and other legal documents, this software will typically generate responsive pleadings, such as answers, interrogatories, requests for admission, and document requests and responses. The following AI software can assist in drafting these types of documents: LegalMation, Ai.law, Briefpoint, and Lexthink.ai. 

Next are AI tools for deposition summarization and analysis. This software leverages AI to reduce the time spent reviewing and obtaining insights from deposition transcripts. Automating these tasks significantly streamlines the review process, allowing for more efficient case preparation and strategy development. Tools that offer this functionality include LegalMation, Lexthink.ai, Casemark, Lexis+ AI, and CoCounsel from Thomson Reuters.

Finally, there are AI tools that assist with brief drafting and analysis. AI technology is beneficial in this context since it can help edit text, improve writing, and adjust tone, reducing the time needed to produce complex legal documents. These tools typically function within word processing software such as Microsoft Word. Products that assist with brief writing include Clearbrief, Briefcatch, EZBriefs, and Wordrake.

Pre-trial AI tools are more than document robots; they're powerful allies that reduce friction and enhance efficiency. Even if you're not yet ready to invest in these tools, it's worthwhile to arm yourself with information about this category of software for future reference, since it will undoubtedly grow more sophisticated over time. With these tools on your side, either now or down the road, you'll be able to focus on crafting winning arguments while AI tackles tedious pre-trial tasks. The result? Less stress, happier lawyers and clients, and a future-proofed legal practice.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


New Report Highlights GenAI Adoption Trends in Law

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

New Report Highlights GenAI Adoption Trends in Law

For legal professionals facing an ever-evolving technology landscape shaped by rapid advancements in artificial intelligence, data-driven decisions are the key to successful adaptation. Because change is occurring so quickly, up-to-date information is essential. That's where the Thomson Reuters Institute 2024 Generative AI in Professional Services Report comes in.

This report highlights how professionals, including lawyers, view and use generative AI (GenAI). It offers insights into legal professionals’ attitudes and adoption rates and provides law firm leaders with timely industry data. Using this information, you can make informed choices about when and how to implement GenAI in your firm.

First, let's consider legal professionals' perspectives on GenAI. The report shows that while 85% of legal professionals believe AI could be applied to their work, only a slight majority (51%) say it should be.

Data from the report also indicates that ethical concerns about the unauthorized practice of law could be driving some of the reluctance surrounding GenAI. The majority (77%) of legal respondents cited this issue as either a significant threat or somewhat of a threat to the profession.

Our judicial counterparts are even more cautious about incorporating GenAI into their workflows, with 60% having no current plans to use it and only 8% currently experimenting with it.

Also notable is that legal-specific GenAI tools are not yet mainstream in our profession. According to the report, only 12% of legal professionals report using legal-specific GenAI tools today, but 43% plan to do so within the next three years. In comparison, consumer GenAI tools are currently more popular, with 27% of legal industry respondents using them and another 20% planning to do so within the next three years. In other words, within a few years, the adoption of legal-specific tools will far outpace that of consumer tools in the legal space, and rightly so, since legal providers have a far better understanding of the unique needs of legal professionals.

For those already using GenAI, top use cases in law firms include legal research, document review, brief or memo drafting, document summarization, and correspondence drafting.

Data from the report showed that compared to their law firm counterparts, corporate legal departments are more document-focused in their GenAI usage. Contract drafting comes in first, followed by document review, legal research, document summarization, and extracting contract data.

Similarly, government and court respondents also focused primarily on leveraging GenAI tools to work with documents. Use cases included legal research, document review, document summarization, brief or memo drafting, and contract drafting.

Another interesting data point from the report concerned how firms handle the cost of GenAI tools used to provide legal services. Law firms report primarily absorbing GenAI investment costs as firm overhead (51%), with a smaller portion passing the costs to clients on a case-by-case basis (16%) or across the board (9%). Another 4% use other methods, and 20% have not yet determined their approach.

Alternative pricing for legal services was also discussed, with more than a third of respondents (39%) sharing that GenAI may result in an increase in the use of alternative fees. Even so, another 28% were unsure how GenAI adoption might impact law firm billing moving forward.

Last but not least, recruitment. Nearly half (45%) of the legal professionals surveyed indicated that their firms do not plan to target applicants with AI or GenAI skills, while 17% identified it as a "nice to have" skill. Only 2% said their firms would require it.

If you haven’t read this report, now’s the time. It provides valuable data that highlights the growing awareness of AI's potential impact on our profession, even though adoption rates vary. Many legal professionals see the value of AI but remain cautious about fully embracing it. 

The findings offer insights that can guide law firm leaders in making informed decisions about integrating AI into their firms' workflows. As AI technology advances, data like this will help you strategically decide when and how to implement GenAI, ultimately shaping the future of your law practice.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

Balancing Innovation and Ethics: Kentucky Bar Association’s Preliminary Stance on AI for Lawyers

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Balancing Innovation and Ethics: Kentucky Bar Association’s Preliminary Stance on AI for Lawyers

The rapid advancement of generative artificial intelligence (AI) technology has had many effects, one of which has been to spur bar association ethics committees into action. In less than two years, at least eight jurisdictions have issued AI guidance in one form or another, including California, Florida, New Jersey, Michigan, and New York, which I’ve covered in this column. 

Most recently, I discussed a joint opinion from the Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility and the Philadelphia Bar Association Professional Guidance Committee, Joint Formal Opinion 2024-200, and promised to subsequently tackle the Kentucky Bar Association's March opinion, Ethics Opinion KBA E-457, which I'll cover today.

This opinion was issued in March and was published to the KBA membership in the May/June edition of the Bench & Bar Magazine. After the 30-day public comment period expires, it will become final.

This opinion covers a wide range of issues, including technology competency, confidentiality, client billing, notification to courts and clients regarding AI usage, and the supervision of others in the firm who use AI. 

Notably, when providing the necessary context for its guidance, the Committee wisely acknowledged that hard-and-fast rules regarding AI adoption by law firms are inadvisable since the technology is advancing rapidly and every law firm will use it in different ways: "The Committee does not intend to specify what AI policy an attorney should follow because it is the responsibility of each attorney to best determine how AI will be used within their law firm and then to establish an AI policy that addresses the benefits and risks associated with AI products. The fact is that the speed of change in this area means that any specific recommendation will likely be obsolete from the moment of publication."

Accordingly, the Committee’s advice was fairly elastic and designed to change with the times as AI technology improves. The Committee emphasized the importance of maintaining technology competency, which includes staying “abreast of the use of AI in the practice of law,” along with the corresponding duties to continually take steps to maintain client confidentiality and to carefully “review court rules and procedures as they relate to the use of AI, and to review all submissions to the Court that utilized Generative AI to confirm the accuracy of the content of those filings.”

As other bar associations have done, the Kentucky Bar Ethics Committee also highlighted the issues surrounding client communication and billing when using AI to streamline legal work. 

Departing from the hard-and-fast requirement that some bars have put in place regarding notifying clients whenever AI is used in their matter, the Committee took a more moderate approach, requiring lawyers to do so only under certain circumstances. The Committee explained that there is no "ethical duty to disclose the rote use of AI generated research for a client's matter unless the work is being outsourced to a third party; the client is being charged for the cost of AI; and/or the disclosure of AI generated research is required by Court Rules."

Next, the Committee determined that when invoicing a client for work performed more efficiently using AI, lawyers should "consider reducing the amount of attorney's fees being charged the client when appropriate under the circumstances." Similarly, lawyers may pass on expenses related to AI software if there is "an acknowledgment in writing whereby the client agrees in advance to reimburse the attorney for the attorney's expense in using AI." However, the Committee cautioned that the "costs of AI training and keeping abreast of AI developments should not be charged to clients."

Finally, the Committee confirmed that lawyers who are partners or managers have a duty to ensure the ethical use of AI by other lawyers and employees, which involves appropriate training and supervision.

This opinion provides a thorough analysis of the issues and sound advice regarding AI usage in law firms. I've only hit the high points, so make sure to read the entire opinion for the Committee's more nuanced perspective, especially if you are a Kentucky attorney. AI is here to stay and will inevitably impact your practice, likely much sooner than you might expect, given the rapid change we're now experiencing. Invest time in learning about this technology now so that you can adapt to the times and incorporate it into your law firm, ultimately providing your clients with more efficient and effective representation.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


More AI Ethics Guidance Arrives With Pennsylvania Weighing In

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

More AI Ethics Guidance Arrives With Pennsylvania Weighing In

The rate of technological change this year has been off the charts. Lately, there's daily news of generative artificial intelligence (AI) product launches, feature releases, or acquisitions. Advancement has been occurring at such a rapid clip that it's more challenging than ever to keep up with the pace of change — blink, and you'll miss it!

Given how quickly AI has infiltrated our lives and profession, it’s been all the more impressive to watch bar association professional disciplinary committees step up to the plate and issue timely, much-needed guidance. Even though generative AI has been around for less than two years, California, Florida, New Jersey, Michigan, and New York had already issued GenAI guidance for lawyers as of April 2024.

Just a few months later, two other states, Pennsylvania and Kentucky, have weighed in, providing lawyers in their jurisdictions with roadmaps for ethical AI usage. Today, I’ll discuss the Pennsylvania guidance and will cover Kentucky’s in my next article.

On May 22, the Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility and the Philadelphia Bar Association Professional Guidance Committee issued Joint Formal Opinion 2024-200. In the introduction to the opinion, the joint Committee explained why it is critical for lawyers to learn about AI: "This technology has begun to revolutionize the way legal work is done, allowing lawyers to focus on more complex tasks and provide better service to their clients…Now that it is here, attorneys need to know what it is and how (and if) to use it." A key way to meet that requirement is to take advantage of "continuing education and training to stay informed about ethical issues and best practices for using AI in legal practice."

The joint Committee emphasized the importance of understanding both the risks and benefits of incorporating AI into your firm's workflows, stating that "with appropriate safeguards, lawyers can utilize artificial intelligence" in a compliant manner.

The opinion included many recommendations and requirements for lawyers planning to use AI in their practices. First and foremost, the joint Committee emphasized basic competence and the need to "ensure that AI-generated content is truthful, accurate, and based on sound legal reasoning." This obligation requires lawyers to confirm "the accuracy and relevance of the citations they use in legal documents or arguments."

Another area of focus was protecting client confidentiality. The joint Committee opined that lawyers must take steps to vet technology providers in order to "safeguard information relating to the representation of a client and ensure that AI systems handling confidential data adhere to strict confidentiality measures."

Notably, the joint Committee highlighted the importance of ensuring that AI tools and their output are unbiased and accurate. This means that when researching a product and provider, steps must be taken to “ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.”

Transparency with clients was also discussed. Lawyers were cautioned to ensure clear communication “with clients about their use of AI technologies in their practices…(including) how such tools are employed and their potential impact on case outcomes.” Lawyers were also advised to clearly communicate with clients about AI-related expenses, which should be “reasonable and appropriately disclosed to clients.”

This guidance, with its emphasis on competence, confidentiality, and transparency, is a valuable resource for lawyers seeking to integrate AI into their practices. This timely advice helps ensure ethical AI usage in law firms, especially for Pennsylvania practitioners. For even more helpful ethics analysis, stay tuned for my next article, where we'll examine Kentucky's recent AI guidance.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

The GenAI Courtroom Conundrum

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

The GenAI Courtroom Conundrum

In the wake of GPT-4's release a year ago, there has been a notable uptick in lawyers' use of generative artificial intelligence (AI) tools when drafting court filings. However, with this embrace of cutting-edge technology has come an increase in well-publicized incidents involving fabricated case citations.

Here is but a sampling of incidents from earlier this year:

  • A Massachusetts lawyer faced sanctions for submitting multiple memoranda riddled with false case citations (2/12/24).
  • A British Columbia attorney was reprimanded and ordered to pay opposing counsel's costs after relying on AI-generated "hallucinations" (2/20/24).
  • A Florida lawyer was suspended by the U.S. District Court for the Middle District of Florida for filing submissions based on fictitious precedents (3/8/24).
  • A pro se litigant's case was dismissed after the court called them out for submitting false citations for the second time (3/21/24).
  • The 9th Circuit summarily dismissed a case without addressing the merits due to the attorney's reliance on fabricated cases (3/22/24).

Judges have been understandably unhappy with this trend, and courts across the nation have issued a patchwork of orders, guidelines, and rules regulating the use of generative AI in their courtrooms. According to data collected by RAILS (Responsible AI in Legal Services) in March, there were 58 different directives on record at that time.

This haphazard manner of addressing AI usage in the courts is problematic. First, it fails to provide much-needed consistency and clarity. Second, it evinces a lack of understanding of the extent to which AI has been embedded within many technology products used by legal professionals for years now — in ways that are not always entirely transparent to the end user.

Fortunately, as this technology has become more commonplace and better understood, some of our judicial counterparts have begun to revise their perspective, taking new approaches to AI-assisted litigation filings. They have wisely decided that rather than over-regulating the technology, our profession would be better served by reinforcing existing rules that require due diligence and careful review of all court submissions, regardless of the tools employed.

For example, earlier this month, the Fifth Circuit U.S. Court of Appeals in New Orleans reversed course and chose not to adopt a proposed rule that would have required lawyers to certify that if an AI tool was used to assist in drafting a filing, all citations and legal analysis had been reviewed for accuracy. Lawyers who violated this rule could have faced sanctions and the risk that their filings would be stricken from the record. In lieu of adopting the rule, the Court advised lawyers to ensure “that their filings with the court, including briefs…(are) carefully checked for truthfulness and accuracy as the rules already require.”

In another case, a judge on the Eleventh Circuit U.S. Court of Appeals used ChatGPT and other generative AI tools to assist in writing his concurrence in Snell v. United Specialty Insurance Company, No. 22-12581. In the concurrence, he explained that he used the tools to aid his understanding of what the term "landscaping" meant within the context of the case. The court was tasked with determining whether the installation of an in-ground trampoline constituted "landscaping" as defined by the insurance policy applicable to a negligence claim. Ultimately, he found that the AI chatbots did, indeed, provide the necessary insight, and referenced this fact in the opinion.

In other words, the times they are a-changin'. The rise of generative AI in legal practice has brought with it significant challenges, but reassuringly, the legal community is adapting. The focus is beginning to shift from restrictive regulations toward reinforcing existing ethical standards, including technology competence and due diligence. Recent developments suggest a balanced approach is emerging, one that acknowledges AI's potential while emphasizing lawyers' responsibility for accuracy. This path forward strikes the right balance between technological progress and professional integrity, and my hope is that more of our esteemed jurists choose this path.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].