GenAI, Talent, and Remote Work: Legal Industry Trends from the 2024 Wolters Kluwer Survey

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

GenAI, Talent, and Remote Work: Legal Industry Trends from the 2024 Wolters Kluwer Survey

The 2024 Wolters Kluwer Future Ready Lawyer Report was released last month and highlights key trends impacting the legal profession. The survey findings from legal professionals across the U.S. and Europe reveal how organizations are addressing efficiency, regulatory pressures, and evolving client needs to stay competitive in a rapidly changing environment. Topics covered include the integration of generative AI (GenAI) into legal workflows, changing remote work expectations, and the value of work-life balance for talent recruitment and retention.

First, let’s examine the GenAI data. The survey results showed that 76% of legal professionals in corporate legal departments and 68% in law firms use GenAI at least weekly, with 35% and 33%, respectively, using it daily. There are implementation challenges, however: 37% of law firm employees and 42% of their corporate counterparts report issues integrating GenAI with their organization’s existing legal systems and processes.

Another notable set of statistics revolved around GenAI’s potential effect on, and potential to disrupt, the almighty billable hour. A surprising 60% of those surveyed expect AI-driven efficiencies to reduce the prevalence of the billable hour moving forward, and 20% predict it will have a significant impact. Fortunately, more than half of the legal professionals surveyed (56%) feel well-prepared to adapt their business practices, service offerings, workflows, and pricing models in response to AI’s potential impact on the traditional billable hour business model.

Additionally, 65% of legal professionals anticipate increased organizational investment in AI technology over the next three years, and 71% expect GenAI’s rapid development to continue impacting firms and corporate legal departments during that same timeframe. Thirty-one percent believe it will have a significant effect, and while 69% feel generally prepared to manage this impact, only 26% consider themselves “very prepared.” These findings are evidence of the significant interest in GenAI, driven by its time-saving benefits. However, trepidation exists regarding the pace of change, implementation challenges, and the levels of investment and training needed to keep up with a rapidly changing technology landscape.

In addition to GenAI trends, perspectives on changing talent acquisition and retention trends were also explored. One positive finding was that 80% of respondents believe their workplaces are equipped to attract the talent they need. Key factors cited as legal talent draws included an acceptable work-life balance (81%), competitive compensation packages (79%), and opportunities for professional development and training (79%).

Interestingly, employees surveyed reported that work culture is particularly important in attracting legal talent: 72% of respondents shared that they valued diverse and inclusive workplaces, and 75% believed their organizations fostered such environments.

Finally, remote work trends were also addressed. The survey results revealed a global trend toward returning to the office, despite the employee push-back often reported in the media. Most respondents (73%) reported that staff are required to work in the office four or more days per week, with this figure higher in corporate legal departments (77%) compared to law firms (69%).

This year’s survey results highlight the legal industry’s efforts to balance the adoption of cutting-edge technologies like GenAI with ethical and implementation challenges. Furthermore, issues like workforce retention remain significant, with the data showing that organizations must prioritize innovation, adaptability, and strategic investments in training and technology. In the midst of rapid technological and societal change, the importance of proactive planning and technology adoption cannot be overstated for organizations seeking to remain competitive and positioned for success in the years to come.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


Closing the Justice Gap: How Courts Are Leveraging GenAI for Greater Accessibility

Last week, I wrote about the release of recent judicial guidance for judges and court personnel seeking to use generative artificial intelligence (GenAI) to assist with the administration of justice. The guidance provided a roadmap for the ethical integration of GenAI into the judiciary's workflows.

This week, let’s discuss another relevant use case for GenAI in the courts: expanding access to justice by making the court system more accessible.

For years, courts have grappled with high caseloads and limited resources. More recently, the number of self-represented litigants has increased due to reduced federal support for legal aid organizations. With this influx of pro se litigants comes an increased demand for legal and procedural information. In the past, court websites and directories provided some assistance but were often difficult to navigate. Obtaining relevant information about court processes continued to be challenging.

Enter GenAI, which has emerged as a powerful tool with the potential to help bridge the access-to-justice gap. One of the most notable benefits of GenAI when applied to a database of information, such as court documents and data, is that it provides a user-friendly interface in the form of a responsive, knowledgeable chatbot. What was once challenging to unearth becomes quick and easy to access.

These GenAI interfaces can make all the difference to our overloaded court systems. Once deployed, they simplify and streamline complex court instructions and processes, from translating complex legal language and providing easy access to templates and court forms to enhancing public understanding of the court system. 

With GenAI, courts can eliminate procedural barriers, provide much-needed information, reduce administrative burdens, and empower pro se individuals with the tools needed to navigate our judicial system. For courts willing to embrace change and take advantage of all these tools, the benefits can be significant.

A few courts have already deployed GenAI-powered chatbots. For example, Nevada courts recently introduced a generative AI-powered chatbot designed to deliver plain-language legal guidance in multiple languages. Developed by CiviLaw.Tech for the Nevada Supreme Court, the tool provides clear, concise, and personalized responses to common legal questions, helping individuals understand their options and the procedural steps they need to take.

Similarly, Lemma Legal recently developed Missouri Tenant Help, an online resource for Missouri tenants seeking legal support. The platform includes an intake screening tool that incorporates the advanced GenAI language processing of GPT-4. This approach helps users determine their eligibility for assistance before speaking with program staff. Adding GenAI to the intake process has removed a key barrier for tenants needing legal help, allowing them to understand their options quickly and easily.
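
For readers curious about the underlying mechanics, here is a minimal sketch of how an LLM-backed intake screener of this kind might work. To be clear, this is not Missouri Tenant Help’s actual implementation: the prompt, the eligibility criteria, and the use of OpenAI’s Python client are all illustrative assumptions.

```python
# Hypothetical sketch of an LLM-assisted intake screener. This is NOT the
# Missouri Tenant Help code; it only illustrates the general pattern of
# using a model like GPT-4 to pre-screen free-form intake answers.
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

SCREENING_PROMPT = """You are an intake assistant for a tenant legal aid program.
Based on the applicant's answers below, state in plain language whether they
appear to meet the program's criteria (a Missouri tenant with an active
housing issue) and list any missing information. Do not give legal advice.

Applicant answers:
{answers}
"""

def screen_applicant(answers: str) -> str:
    """Return a plain-language eligibility summary for program staff to review."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": SCREENING_PROMPT.format(answers=answers)}],
    )
    return response.choices[0].message.content

print(screen_applicant("I rent an apartment in St. Louis and just received an eviction notice."))
```

The value of this pattern is that the model converts a tenant’s free-form answers into a structured eligibility summary, so program staff spend their time on the applicants who appear to qualify.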

These early court adopters of GenAI are finding that, with careful oversight, generative AI can not only make legal resources more accessible but also improve the efficiency and effectiveness of courts as a whole. While challenges remain—especially around ethical implications, data privacy, and accuracy—GenAI interfaces present unique opportunities to democratize access to justice. 

As courts continue to experiment with and refine these tools, the hope is that legal services will become more readily available and tailored to the needs of all individuals, regardless of their background or resources. Will this actually happen, or is it a pie-in-the-sky pipe dream? I tend toward cynicism, but every effort counts and moves us one step closer to a more equitable and accessible judicial system. Only time will tell if GenAI will truly help bridge the access-to-justice gap.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

 


Judicial Ethics: Navigating the AI Era

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Judicial Ethics: Navigating the AI Era

Over the past two years, generative artificial intelligence (AI) ethics guidance has been plentiful, with many state bars swiftly responding to the increasing need for AI adoption advice. Within months of ChatGPT’s initial release in November 2022, the risks of using generative AI in legal practice were alarmingly clear, as captured in numerous sensational headlines. The benefits were also evident, with the speed of adoption outpacing the learning curve needed to use these tools competently. As more AI ethics opinions were issued, a clear path to ethical adoption emerged for lawyers.

But what about judges and court staff? Generative AI offers obvious benefits that could significantly increase efficiencies and remove tedium from their daily workflows by streamlining legal research and the drafting of orders and opinions. Of course, ethical implementation of AI by the courts is essential, and while the risks presented are similar to those encountered by lawyers, there are also considerations unique to the judiciary.

The good news is that some guidance is available. For starters, in October 2023, two different judicial ethics opinions were released. The first, JIC Advisory Opinion 2023-22, was issued on October 13, 2023, by the West Virginia Judicial Investigation Commission.

The Commission determined that judges may use AI for research purposes but not when deciding the outcome of a case. It also advised that extreme caution should be taken when using AI to assist with drafting orders or opinions, and it emphasized the importance of maintaining technology competence when using AI, clarifying that the duty is ongoing.

Later that month, on October 27, 2023, judicial ethics opinion JI-155 was issued in Michigan. The focus of this opinion was technology competence. Like the West Virginia opinion, it advised judges to maintain competence with technology, including AI: “(J)udicial officers have an ethical duty to maintain technological competence and understand AI’s ethical implications to ensure efficiency and quality of justice (and) take reasonable steps to ensure that AI tools on which their judgment will be based are used properly and that the AI tools are utilized within the confines of the law and court rules.”

More recently, Delaware and Georgia issued orders addressing the judiciary's use of AI. On October 21, 2024, the Delaware Supreme Court adopted an interim AI policy for judges and court personnel (online: https://courts.delaware.gov/forms/download.aspx?id=266848). It requires users to maintain technology competence and outlines the appropriate usage of authorized AI tools, including the requirement that “(u)sers may not delegate their decision-making function to Approved GenAI.” 

The State of Georgia’s Order concerned the formation of its Ad Hoc Committee on Artificial Intelligence (online: https://jcaoc.georgiacourts.gov/wp-content/uploads/2024/10/AI_Committee_Orders.pdf). The Order appointed sixteen people to the committee, whose mission is to assess “the risks and benefits of the use of generative AI on the courts and to make recommendations to ensure that the use of AI does not erode public trust and confidence in the judicial system.”

While guidance for the judiciary has been less plentiful, it remains valuable. These guidelines offer a clear roadmap for adopting AI responsibly, ensuring that judicial integrity is preserved throughout implementation. As AI technology advances rapidly, the judiciary must keep pace by leveraging AI’s potential to streamline processes and improve the quality of justice. By committing to continuous education and adhering to these standards, courts can gain the benefits of AI while upholding judicial integrity and maintaining public trust.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

 

 


Amid a Flurry of AI Ethics Opinions, New Mexico Weighs In

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Amid a Flurry of AI Ethics Opinions, New Mexico Weighs In

Did you know that in less than two years, more than ten U.S. jurisdictions have issued guidance on generative artificial intelligence (AI)? For the past two decades, I’ve written about legal technology. My goal has always been to help legal professionals navigate the twists and turns of 21st-century innovations. From blogging and social media to cloud and mobile computing, I’ve encouraged members of my profession to actively learn about and implement technology into their practices.

Initially, my efforts felt like swimming upstream. Very few colleagues were receptive, and only the most tech-savvy showed interest in new tools and platforms. Adoption rates were slow, and my attempts to educate were often met with indifference.

Then, in early 2020, the pandemic struck, forcing lawyers to work remotely, conduct meetings online, and rely heavily on cloud-based tools. Attitudes shifted almost overnight, leading to a dramatic spike in technology adoption.

In many ways, the pandemic had the effect of priming legal professionals to be open to new tools and ways of working. This change of heart could not have come at a better time, and when generative AI was unleashed, attorneys were immediately receptive and curious about its potential to streamline their workflows and increase law firm profitability.

When ChatGPT was released in November 2022, it amounted to a technological tidal wave whose impact on the practice of law continues to be felt today. The amount of ethics guidance focused on a single technology that has been handed down over the past two years is unprecedented. This rapid response reflects both heightened concerns about potential risks and the acknowledgment of the potentially significant impact that AI could have on the practice of law. By my count, ten jurisdictions have issued guidance or opinions on the ethics of using AI in law firms: California, Florida, New Jersey, Michigan, New York, Pennsylvania, Kentucky, the American Bar Association, Virginia, and D.C.

Most recently, New Mexico joined their ranks, issuing Formal Ethics Advisory Opinion 2024-005 (online: https://www.sbnm.org/Portals/NMBAR/GenAI%20Formal%20Opinion%20-%20Sept_2024_FINAL.pdf). At issue was whether lawyers may use generative AI in the practice of law. The short answer? Yes.

The State Bar of New Mexico Ethics Advisory Committee determined that, generally speaking, “the responsible use of Generative AI is consistent with lawyers’ (ethical) duties.” According to the Committee, generative AI offers many potential benefits for lawyers and their clients, increasing efficiency and reducing costs.

The Committee offered a number of examples of use cases, which include the initial drafting of legal documents and routine correspondence, assisting with drafting complex contracts or cross-examining witnesses, and streamlining discovery. 

Importantly, the Committee clarified that lawyers are not required to use this technology, but “those lawyers who choose to do so…must do so responsibly, recognizing that the use of Generative AI does not change their fundamental duties under the Rules of Professional Conduct.” 

Interestingly, the Committee offered a unique take on the risk of law firm data being used to train AI models. According to the Committee, conflict of interest issues could be triggered when using generative AI since “there is a risk that future outputs may use information relating to the prior representation or concurrent representation by another lawyer in the same firm in a way that disadvantages the prior/other client.” The Committee cautioned that if lawyers are unable to verify a lack of a conflict, they should avoid inputting confidential client data into a generative AI tool unless they’ve confirmed that the tool possesses safeguards that “protect prior client information and…screen potential conflicts.”

The Committee also addressed many other ethical issues that are implicated when lawyers use generative AI, including confidentiality, candor toward the tribunal, AI costs and billing, and supervisory issues. Make sure to read the full opinion for their in-depth analysis of these topics, especially if you happen to practice law in New Mexico.

No matter where you practice, one thing is clear: keeping up with the pace of change is essential. Given generative AI's rapid advancement, it is more important than ever to stay informed, uphold ethical standards, and take full advantage of AI’s benefits. By doing so, you’ll be well-positioned to thrive, ultimately providing better client service and staying ahead of the curve in an increasingly competitive legal marketplace.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

 

 

 


New York Surrogate's Court on Admissibility of AI Evidence

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

New York Surrogate's Court on Admissibility of AI Evidence

The last few decades have seen rapid technological advances. For busy lawyers, keeping up with the pace of change has been a challenging endeavor. For many, the inclination has been to ignore the latest advancements in favor of maintaining the status quo.

Unfortunately, that approach has proven ineffective. 21st-century technologies have infiltrated all aspects of our lives, from how we communicate, make purchases, and obtain information to how we conduct business. Turning a blind eye is no longer an option. Instead, it is necessary to prioritize learning about emerging technologies, including their potential implications for your law practice, your clients’ cases, and your law license. 

This enlightened approach is essential as we enter the artificial intelligence (AI) era. Like the technologies that preceded it, AI will inevitably impact many aspects of your law practice, even if you choose not to incorporate it into your firm’s daily workflows.

For example, just as social media evidence has altered the course of trials, so too has artificial intelligence. A case in point is Saratoga County Surrogate’s Court Judge Schopf's October 10th Court Order in Matter of Weber (2024 NY Slip Op 24258). One issue under consideration in this case was the use of generative AI-produced evidence at a hearing.

In Matter of Weber, the Petitioner filed a Petition for Judicial Settlement of the Interim Account of the Trustee. The Objectant responded by filing objections to the Trust Account alleging, in relevant part, that the Petitioner had breached her fiduciary duty as Trustee. A hearing was held to address the Objectant’s allegations. 

This opinion followed. In it, the Court considered whether the Objectant had overcome the prima facie accuracy of the Interim Account and proved his objections. One issue addressed was whether and under what circumstances AI-generated output is admissible into evidence.

The hearing testimony revealed that the Objectant's expert witness, Charles Ranson, used Microsoft’s generative AI tool, Copilot, to cross-check his damage calculations. The evidence showed that Ranson could not provide the input or prompt used, nor could he advise regarding the sources relied upon and the process used by the chatbot to create the output provided. 

When determining the admissibility of Copilot’s responses, Judge Schopf explained that the “mere fact that artificial intelligence has played a role, which continues to expand in our everyday lives, does not make the results generated by artificial intelligence admissible in Court.”

The Court concluded that the reliability of AI-generated responses must be established before they are admitted into evidence. The Court explained that “due to the nature of the rapid evolution of artificial intelligence and its inherent reliability issues that prior to evidence being introduced which has been generated by an artificial intelligence product or system, counsel has an affirmative duty to disclose the use of artificial intelligence and the evidence sought to be admitted should properly be subject to a Frye hearing prior to its admission, the scope of which should be determined by the Court, either in a pre-trial hearing or at the time the evidence is offered.”

According to Judge Schopf, the Objectant failed to meet that burden: “In the instant case, the record is devoid of any evidence as to the reliability of Microsoft Copilot in general, let alone as it relates to how it was applied here. Without more, the Court cannot blindly accept as accurate, calculations which are performed by artificial intelligence.”

This decision evinces the growing need to carefully scrutinize AI-generated evidence in legal proceedings. Courts are unlikely to admit this type of evidence at this early stage unless its reliability is established beforehand. As this technology becomes commonplace, these standards may evolve and become more elastic. Only time will tell.

In the interim, take steps to proactively learn about AI tools so that you can advocate for or challenge their use in court effectively. By staying informed, you will be well-positioned to meet both the opportunities and challenges posed by AI-driven evidence. There’s no better time than now to get up to speed. Start learning about generative AI today to ensure that you are prepared for the future of law.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


Florida’s Professional Conduct Rules Will Include AI—But Was It Needed?

 

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Florida’s Professional Conduct Rules Will Include AI—But Was It Needed?

When faced with the impact of a potentially disruptive technology, our profession follows a very predictable path: ignorance, indifference, overreaction, readjustment, begrudging acceptance, and finally, appreciation. With the sudden emergence and advancement of generative artificial intelligence (AI), the cycle has started, and all signs point to deep entrenchment in the overreaction phase.

OpenAI released ChatGPT, built on GPT-3.5, nearly two years ago, in November 2022. Since then, AI's effects have been inescapable, rapid, and significant. Headlines about lawyers relying on generative AI tools and submitting briefs to courts that include fake case citations have only amplified already heightened and overblown concerns about AI.

These reactions are unsurprising given AI's wide-ranging potential to revamp core legal functions, from legal research and document drafting to litigation analytics and contract review. In the face of inevitable change, our profession is now focused on whether AI is a tool that will enhance lawyers' practices or a force that could undermine or even replace the practice of law as we know it.

In response to these concerns, several jurisdictions across the United States have formed AI committees, issued guidance, or authored opinions to help lawyers navigate a strange new world where AI runs rampant. More than eight states, including Florida, California, and Michigan, have taken formal steps to address AI’s role in legal practice. 

While these efforts are welcome to the extent that they help to encourage adoption, they are arguably unnecessary. Current rules and guidance on technology usage are more than sufficient.

The most recent efforts arose in Florida, where the Bar took the extreme step of modifying the Rules Regulating The Florida Bar to include references to generative AI. On August 29, the Florida Supreme Court adopted the amendments proposed by the Bar. These changes will go into effect on October 28.

One update was to the comment regarding the competency requirement. It now advises that lawyers must stay on top of technology changes, “including generative artificial intelligence.” 

Additionally, the duty of confidentiality now includes the obligation to “be aware that generative artificial intelligence may create risks to the lawyer’s duty of confidentiality.” 

Similarly, the duty of supervision now requires that supervising attorneys must “consider safeguards for the firm’s use of technologies such as generative artificial intelligence, and ensure that inexperienced lawyers are properly supervised.”

Finally, Rule 4-5.3, which addresses lawyers’ responsibilities regarding nonlawyer assistants, now requires that a “lawyer should also consider safeguards when assistants use technologies such as generative artificial intelligence.”

These amendments were unnecessary and unwise and will not withstand the tests of time. AI is simply a new tool. Other technologies preceded it, and new ones will follow. It is not the be-all and end-all of technology or our profession, and trying to ban or reduce its use by lawyers is a pointless, ineffectual endeavor that fails to serve the needs of our profession in the AI era.

The reactions by state bars to AI are entirely predictable. No matter the technology, our profession has tried to regulate it, from email and the Internet to social media and cloud computing. 

Demonizing new technology and banning its use have been par for the course. Eventually, however, a resigned acceptance set in as each tool became commonplace. AI will be no different. 

Soon, we’ll move on from overreaction to the later phases, ultimately landing on appreciation. This process will happen much faster than it has in the past due to AI’s rapid rate of advancement. So buckle up, shore up your AI knowledge, and hold on, folks! The times are a-changin’, and quickly. Catch up while you still can!

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

 


ILTA's 2024 Tech Survey: AI, Cloud, and the Tools Driving Law Firm Efficiency

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

ILTA's 2024 Tech Survey: AI, Cloud, and the Tools Driving Law Firm Efficiency

The 2024 International Legal Technology Association (ILTA) Technology Survey was recently released, and it provides a wealth of information on technology adoption trends in law firms. Not surprisingly, this year it includes data on how lawyers are incorporating generative artificial intelligence (AI) into their firms. However, many other technology issues are addressed as well.

The report reveals how AI is currently being used in firms and provides data on plans for future investment in AI and other technologies. Areas addressed include cloud-based tools, software to streamline law firm operations, and technologies adopted to support remote work.

First, let’s take a look at the AI data. The survey results show that AI adoption has increased over the past year, with 37% of firms now using it compared to only 15% in 2023. The data also shows that larger law firms are leading the way, with 74% of firms of 700+ lawyers using AI tools in 2024. In comparison, only 20% of firms with fewer than 50 lawyers are using AI, followed by firms with 50-149 lawyers (27%), firms with 150-349 lawyers (36%), and firms with 350-699 lawyers (65%).

Overall, firms are using AI to address various business needs. The top functions supported by AI include marketing and business development (49%), litigation support (42%), billing and accounting (31%), and professional development (27%).

Lawyers are also relying on AI's benefits to increase efficiency in their daily workflows. The results show that research is expected to be the top use of generative AI in the next year, according to 73% of the respondents. Other popular use cases include summarizing complex documents (70%, up from 48% in 2023), creating initial drafts of documents (69%, up from 61% in 2023), writing presentations (61%, up from 55% in 2023), and drafting alerts or email notifications (50%, up from 43% in 2023).

Compared to last year, plans to use AI for creative tasks have shifted: brainstorming ideas (46%, versus 43% in 2023), writing/troubleshooting code (33%, down from 36%), and generating strategic ideas (27%, down from 28%). Despite the declines in some categories, the bulk of the data shows that there continues to be significant interest in the potential of generative AI and its promise of improving productivity firmwide.

The survey also explored remote collaboration technology adoption, seeking insight into the tools used most often in firms. The data showed that most firms now use video conferencing tools like Zoom and WebEx, with 94% of respondents reporting the availability of these platforms in their firms. Email (91%) and chat tools like Teams and Jabber (84%) are also widely used. Document-sharing functionality is gaining traction as well, with 44% relying on these tools, reflecting the continued shift to digital workspaces.

In keeping with the shift to online collaboration, firms are also increasingly moving to a “paper-less” approach. Only 13% are not considering a shift to using digital documents. 49% report that their firms are “paper-lite,” 20% have paperless projects underway, 8% of firms are working on a paperless strategy, and 10% are fully paperless.

Finally, one of the most notable data points reflects this digital-first trend: the rapid rise of cloud-based tools post-pandemic. When it comes to cloud use, 43% of firms say they are “mostly in the cloud,” while another 42% opt for a “cloud with every upgrade” approach. Only 2% of respondents indicated that they are “not yet comfortable with the cloud,” down from 7% in 2021.

Overall, these findings from the 2024 ILTA Technology Survey highlight the legal industry's ongoing shift toward digital-first practices, with AI playing a key role in that transition. Firms are increasingly relying on AI, cloud-based tools, and remote collaboration technologies to streamline operations and support flexible work environments.

How does your firm compare? In today’s competitive legal marketplace, what steps are you taking to implement AI, cloud solutions, and digital collaboration tools into your firm’s IT stack to streamline efficiency, improve workflows, and provide better client service?

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


PayPal Fee Trust Account Mishap Results in NJ Disciplinary Reprimand 

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

PayPal Fee Trust Account Mishap Results in NJ Disciplinary Reprimand

At the turn of the century, it was uncommon for lawyers to accept credit cards. Today, however, most law firms take advantage of the flexibility and convenience of online payments. According to the 2024 MyCase and LawPay Legal Industry Report, 78% of law firms now accept online payments via credit or debit card. Nearly half of those firms (44%) report that they collect $3,000 or more per month as a result.

With the rise of online payments in the legal field and beyond, there has been a corresponding increase in the number of tools that enable law firms to accept credit card payments. Sifting through the many options available requires an analysis of features, processing fees, and ethical compliance, among other things. Identifying the right tool for your firm’s needs can be a challenging task, but many experts recommend choosing a payment platform designed for the needs of lawyers in order to avoid violating the many ethical requirements surrounding lawyer trust accounts.

Case in point: a New Jersey lawyer was recently reprimanded due to an $18.90 overdraft of the firm’s trust account. The reprimand was recommended by the Disciplinary Review Board and imposed by the New Jersey Supreme Court on September 4th in The Matter of Michael A. Gorokhovich (D-70, September Term 2023, 089080).

The reprimand stemmed from an overdraft caused by a $19.99 PayPal Business charge to his trust account. The record showed that two earlier debits were similar in nature, but those had not overdrawn the trust account and went unnoticed. The lawyer claimed he had not authorized the debits and closed his PayPal account to prevent future issues. 

The attorney was reprimanded for “failing to maintain receipts and disbursements journals; conduct monthly, three-way ATA reconciliations; maintain individual client trust ledgers; maintain a running cash balance in his ATA checkbook; and retain ABA and ATA records for seven years. Moreover, because of his inept recordkeeping practices, he failed to notice, let alone put a stop to, allegedly unauthorized electronic charges to his ATA until after the third such charge caused an overdraft in that account.” (In New Jersey recordkeeping parlance, “ATA” refers to the attorney trust account and “ABA” to the attorney business account.)

As a result of the reprimand, he was required to “(1) complete a recordkeeping course preapproved by the Office of Attorney Ethics within sixty days of this order; (2) submit proof to the Office of Attorney Ethics, within sixty days of this order, that the recordkeeping deficiencies identified during the audit have been corrected; and (3) provide to the Office of Attorney Ethics monthly reconciliations of respondent’s attorney accounts, on a quarterly basis, for two years…”

Using payment processing software specifically designed for lawyers could have prevented this type of mishap. Legal-specific payment platforms are designed with the complexities of law firm billing and trust accounting in mind and typically include built-in safeguards tailored to meet the ethical requirements surrounding attorney trust accounts. 

These tools automatically separate earned fees from unearned funds, protecting trust accounts from unauthorized and unethical debits. Similarly, these platforms prevent credit card processing and other fees from being withdrawn directly from attorney trust accounts, avoiding unauthorized debits and potential overdrafts similar to the one that triggered the reprimand in this case.

Another benefit of legal payment platforms is that they can include features that simplify compliance with trust accounting rules. Detailed transaction reports can be run that enable easy tracking of client funds and facilitate the three-way reconciliations required by most state bar associations. 
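
For the technically inclined, the arithmetic behind a three-way reconciliation is simple to sketch: the bank statement balance, the firm’s running trust checkbook balance, and the sum of the individual client ledgers must all agree. The short example below, with entirely hypothetical names and figures, illustrates the check; real reconciliations must also account for timing items such as outstanding checks and deposits in transit.

```python
# Illustrative three-way trust account reconciliation check.
# All client names and balances are hypothetical.
from decimal import Decimal

def three_way_reconciliation(bank, checkbook, client_ledgers):
    """Return True when all three balances agree to the penny."""
    ledger_total = sum(client_ledgers.values(), Decimal("0"))
    return bank == checkbook == ledger_total

client_ledgers = {
    "Client A": Decimal("5000.00"),
    "Client B": Decimal("2500.00"),
    "Client C": Decimal("1250.75"),
}

bank_balance = Decimal("8750.75")       # from the monthly bank statement
checkbook_balance = Decimal("8750.75")  # running balance in the trust checkbook

if three_way_reconciliation(bank_balance, checkbook_balance, client_ledgers):
    print("Reconciled: all three balances agree.")
else:
    print("Discrepancy found -- investigate before closing the month.")
```

An unauthorized debit like the $19.99 PayPal charge in this case would throw the bank balance out of agreement with the client ledgers, which is exactly the kind of discrepancy a monthly three-way reconciliation is designed to catch.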

Payment processing tools built for legal professionals are designed to ensure the proper maintenance of accounting records. If your firm isn’t yet using a legal payment processing platform, the New Jersey disciplinary case clearly shows why now is the time to make that change. Legal-specific platforms help protect client funds, ensure trust account compliance, and prevent the ethical pitfalls that can arise with general payment processors like PayPal.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

 

 


Technology Competence in the Age of Artificial Intelligence

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Technology Competence in the Age of Artificial Intelligence

With technology evolving so quickly, powered by the rapid development of generative artificial intelligence (AI) tools, keeping pace with change becomes all the more critical. For lawyers, the ethical requirement of maintaining technology competence plays a large part in that endeavor.

The duty of technology competence is relatively broad, and the obligations required by this ethical rule can sometimes be unclear, especially when applied to emerging technologies like AI. Rule 1.1 states that a “lawyer should provide competent representation to a client.” The comments to this rule clarify that to “maintain the requisite knowledge and skill, a lawyer should . . . keep abreast of the benefits and risks associated with technology the lawyer uses to provide services to clients or to store or transmit confidential information.”

With the proliferation of AI, this duty has become all the more relevant, especially as trusted legal software companies begin to incorporate this technology into the platforms that legal professionals use daily in their firms. Lawyers seeking to take advantage of the significant workflow efficiencies that AI offers must ensure that they’re doing so ethically. 

That’s easier said than done. In today’s fast-paced environment, what is required to meet that duty? Does it simply require that you understand the concept of AI? Do you have to understand how AI tools work? Is there a continuing obligation to track changes in AI as it advances? If you have no plans to use it, can you ignore it and avoid learning about it? 

Fortunately for New York lawyers, there are now two sets of ethics guidance available: the New York State Bar’s April 2024 Report and Recommendations from the Taskforce on Artificial Intelligence and, more recently, Formal Opinion 2024-5, issued by the New York City Bar Association.

The New York State Bar’s guidance on AI is overarching and general, particularly regarding technology competence. As the “AI and Generative AI Guidelines” provided in the Report explain, lawyers “have a duty to understand the benefits, risks and ethical implications associated with the Tools, including their use for communication, advertising, research, legal writing and investigation.”

While instructive, the advice is fairly general, and intentionally so. As the Committee explained, AI is no different than the technology that preceded it, and thus, “(m)any of the risks posed by AI are more sophisticated versions of problems that already exist and are already addressed by court rules, professional conduct rules and other law and regulations.” 

For lawyers seeking more concrete guidance on technology competence when adopting AI, look no further than the New York City Bar’s AI opinion. In it, the Ethics Committee offers significantly more granular insight into technology competence obligations.

First, lawyers must understand that current generative AI tools may include outdated information “that is false, inaccurate, or biased.” The Committee expects lawyers to understand not only what AI is but also how it works.

Before choosing a tool, there are several recommended courses of action. First, you must “understand to a reasonable degree how the technology works, its limitations, and the applicable [T]erms of [U]se and other policies governing the use and exploitation of client data by the product.” Additionally, you may want to learn about AI by “acquiring skills through a continuing legal education course.” Finally, consider consulting with IT professionals or cybersecurity experts.

The Committee emphasized the importance of carefully reviewing all responses for accuracy, explaining that generative AI outputs “may be used as a starting point but must be carefully scrutinized. They should be critically analyzed for accuracy and bias.” The duty of competence requires lawyers to ensure that the original input is correct and to analyze the corresponding response “to ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand, including as part of advocacy for the client.”

The Committee further clarified that you cannot delegate your professional judgment to AI and that you “should take steps to avoid overreliance on Generative AI to such a degree that it hinders critical attorney analysis fostered by traditional research and writing.” This means that lawyers should supplement AI output “with human-performed research and supplement any Generative AI-generated argument with critical, human-performed analysis and review of authorities.”

If you plan to dive into generative AI, both sets of guidance should provide a solid roadmap to help you navigate your technology competence duties. Understanding how AI tools function, along with their limitations, is essential when using this technology. By staying informed and applying critical judgment to the results, you can ethically leverage AI’s many benefits to provide your clients with the most effective, efficient representation possible.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].



Beyond Simple Tools: vLex's Vincent AI and the Future of Trusted Legal AI Platforms

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Beyond Simple Tools: vLex's Vincent AI and the Future of Trusted Legal AI Platforms

There has been a noticeable shift in the way that legal technology companies are approaching generative artificial intelligence (AI) product development. Last year, several general legal assistant chatbots were released, mimicking the functionality of ChatGPT. Next came the emergence of single-point solutions, often developed by start-ups, to address specific workflow challenges such as drafting litigation documents, analyzing contracts, and legal research. 

As we approach the final quarter of 2024, established legal technology providers are more deeply integrating generative AI into comprehensive platforms, streamlining the user interface of legal research, practice management, and document management tools. Rather than standalone tools, generative AI is becoming a core feature of legal platforms, enabling users to access all their data and software seamlessly in one trusted environment.

A notable example is vLex, which acquired Fastcase last year and announced major updates to its Vincent AI product this week. Ed Walters, vLex’s Chief Strategy Officer, described the update as Vincent’s transformation from an AI-powered legal research and litigation drafting tool into a full-fledged legal AI platform.

This release expands workflows for transactional, litigation, and contract matters, enabling users to 1) analyze contracts, depositions, and complaints, 2) perform redline analysis and document comparisons, 3) upload documents to find related authorities, 4) generate research memoranda, 5) compare laws across jurisdictions, and 6) explore document collections to extract facts, create timelines, and more.

Legal research giants LexisNexis and Thomson Reuters likewise rolled out revamped versions of their generative AI assistants last month, reinforcing the trend toward AI-driven platforms. LexisNexis introduced Protégé, an AI assistant designed to meet each user’s specific needs and workflows and to serve as the gateway to a suite of LexisNexis products. Meanwhile, Thomson Reuters unveiled CoCounsel 2.0, an enhanced version of the AI assistant it originally launched last year. Built on technology from its acquisition of Casetext’s CoCounsel, this upgraded legal assistant acts as the central interface for accessing many Thomson Reuters tools and resources, streamlining workflows across its products.

Despite the platform trend, single-point AI solutions remain valuable, especially for solo and small firms looking to streamline specific tasks like document analysis, drafting pleadings, or preparing discovery responses. These standalone tools continue to be developed and offer significant value for firms not already invested in a software ecosystem with integrated AI. If you’re in the market for a tool that accomplishes only one task, there’s most likely an AI solution available that fits the bill.

However, for many firms, AI integration into the software platforms they already use will likely be the most practical path forward. This approach helps to bridge the implementation gap and addresses common concerns about trust, which are often barriers to AI adoption. By partnering with trusted legal technology providers, firms can more comfortably adopt AI by leveraging the security and reliability of the platforms already in place.

With the deeper integration of AI into comprehensive legal platforms, the adoption process will become smoother, allowing legal professionals to enjoy the benefits of the reduced friction and tedium resulting from more streamlined law firm processes. This shift will allow legal professionals to focus on more meaningful work, improving both the practice of law and client service. 

Whether a standalone product or built into legal software platforms, generative AI offers significant potential, some of which is already being realized. It’s more than just another tool—it may very well redefine how law firms operate, paving the way for a more efficient and effective future.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


Legal Ethics in the AI Era: The NYC Bar Weighs In

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Legal Ethics in the AI Era: The NYC Bar Weighs In

Since the release of ChatGPT in November 2022, many jurisdictions have issued AI guidance. In this column, I’ve covered the advice rendered by many ethics committees, including California, Florida, New Jersey, Michigan, New York, Pennsylvania, Kentucky, the American Bar Association, and most recently, Virginia.

Now, the New York City Bar Association has entered the ring, issuing Formal Ethics Opinion 2024-5 on August 7th. The Association’s Committee on Professional Ethics mirrored the California Bar’s approach and provided general guidelines in a chart format rather than proscriptive requirements. The Committee explained that “when addressing developing areas, lawyers need guardrails and not hard-and-fast restrictions or new rules that could stymie developments.” Instead, the goal was to provide assistance to New York attorneys through “advice specifically based on New York Rules and practice…”

Regarding confidentiality, the Committee distinguished between “closed systems” consisting of a firm’s “own protected databases,” like those typically provided by legal technology companies, and systems like ChatGPT that share inputted information with third parties or use it for their own purposes. Client consent is required for the latter, and even with “closed systems,” confidentiality protections within the firm must be maintained. The Committee cautioned that the terms of use for a generative AI tool should be reviewed regularly to ensure that the technology vendor is not using inputted information to train or improve its product in the absence of informed client consent.

Turning to the duty of technology competence, the Committee opined that when choosing a product, lawyers “should understand to a reasonable degree how the technology works, its limitations, and the applicable [T]erms of [U]se and other policies governing the use and exploitation of client data by the product.” Also emphasized was the need to avoid delegating professional judgment to these tools and to consider generative AI outputs to be a starting point. Not only must lawyers ensure that the output is accurate, but they should also take steps to “ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand.”

The duty of supervision was likewise addressed, with the Committee confirming that firms should have policies and training in place for lawyers and other employees in the firm regarding the permissible use of this technology, including ethical and practical uses, along with potential pitfalls. The Committee also advised that any client intake chatbots used by lawyers on their websites or elsewhere on behalf of the firm should be adequately supervised to avoid “the risk that a prospective client relationship or a lawyer-client relationship could be created.”

Not surprisingly, the Committee required lawyers to be aware of and comply with any court orders regarding AI use. Another court-related issue addressed was AI-created deepfakes and their impact on the judicial process. According to the Committee’s guidance, lawyers must screen all client-submitted evidence to assess whether it was generated by AI, and if there is a suspicion “that a client may have provided the lawyer with Generative AI-generated evidence, a lawyer may have a duty to inquire.”

Finally, the Committee turned to billing issues, agreeing with other jurisdictions that lawyers may charge for time spent crafting inquiries and reviewing output. Additionally, the Committee explained that firms may not bill clients for time saved as a result of AI usage and may want to explore alternative fee arrangements to stay competitive, since AI may significantly impact legal pricing moving forward. Last but not least, any generative AI costs should be disclosed to clients, and any such costs “should be consistent with ethical guidance on disbursements and should comply with applicable law.”

The summary above provides only an overview of the opinion. For a more nuanced perspective, you should read it in its entirety. Whether you’re a New York lawyer or practice elsewhere, this guidance is worth reviewing and provides a helpful roadmap for adoption as we head into an AI-led future where technology competence is no longer optional; it is an essential requirement for the effective and responsible practice of law.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


Practical and Adaptable AI Guidance Arrives From the Virginia State Bar 

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Practical and Adaptable AI Guidance Arrives From the Virginia State Bar 

If you're concerned about the ethical issues surrounding artificial intelligence (AI) tools, the good news is that there's no shortage of guidance. A wealth of resources, guidelines, and recommendations are now available to help you navigate these concerns. 

Traditionally, bar associations have taken years to analyze the ethical implications of new and emerging technologies. Generative AI has reversed this trend: ethics guidance has emerged far more quickly, a welcome change from the norm.

Since the general release of the first version of ChatGPT in November 2022, ethics committees have stepped up to the plate and offered much-needed AI guidance to lawyers at a remarkably rapid clip. Jurisdictions that have weighed in include California, Florida, New Jersey, Michigan, New York, Pennsylvania, and Kentucky, as has the American Bar Association.

Recently, Virginia entered the AI ethics discussion with a notably concise approach. Unlike the often lengthy and detailed analyses from other jurisdictions, the Virginia State Bar issued a streamlined set of guidelines, posted as an update at the bottom of the ethics page on its website (https://vsb.org/Site/Site/lawyers/ethics.aspx). This approach stands out not only for its brevity but also for its focus on providing practical, overarching advice. By avoiding the intricacies of specific AI tools or interfaces, the Virginia State Bar has ensured that its guidance remains flexible and relevant, even as the technology rapidly evolves.

Importantly, the Bar acknowledged that regardless of the type of technology at issue, lawyers’ ethical obligations remain the same: “(A) lawyer’s basic ethical responsibilities have not changed, and many ethics issues involving generative AI are fundamentally similar to issues lawyers face when working with other technology or other people (both lawyers and nonlawyers).”

Next, the Bar examined confidentiality obligations, opining that just as lawyers must review data-handling policies relating to other types of technology, so, too, must they vet the methods used by AI providers when handling confidential client information. The Bar explained that while legal-specific providers can often promise better data security, there is still an obligation to ensure a full understanding of their data management approach: “Legal-specific products or internally-developed products that are not used or accessed by anyone outside of the firm may provide protection for confidential information, but lawyers must make reasonable efforts to assess that security and evaluate whether and under what circumstances confidential information will be protected from disclosure to third parties.”

One area where the Bar’s approach conformed to that of most jurisdictions was client consent. While the ABA suggested that explicit client consent is required in many cases when AI is used, the Bar agreed with most other ethics committees, concluding that there “is no per se requirement to inform a client about the use of generative AI in their matter” unless there are extenuating circumstances, like an agreement with the client or increased risks such as those encountered when using consumer-facing products.

The Bar also considered supervisory requirements, emphasizing the importance of reviewing all output, just as a lawyer would review work from any other source. According to the Bar, as “with any legal research or drafting done by software or by a nonlawyer assistant, a lawyer has a duty to review the work done and verify that any citations are accurate (and real),” and that duty of supervision “extends to generative AI use by others in a law firm.”

Next, the Bar provided insight into the impact of AI usage on legal fees. The Bar agreed that lawyers cannot charge clients for the time saved as a result of using AI: “A lawyer may not charge an hourly fee in excess of the time actually spent on the case and may not bill for time saved by using generative AI. The lawyer may bill for actual time spent using generative AI in a client’s matter or may wish to consider alternative fee arrangements to account for the value generated by the use of generative AI.”

On the issue of passing the costs of AI software on to clients, the Bar concluded that doing so is not permissible unless the charge is both reasonable and “permitted by the fee agreement.”

Finally, the Bar addressed a handful of recently issued court rules that forbid the use of AI in document preparation, highlighting the importance of being aware of, and complying with, all court disclosure requirements regarding AI usage.

The Virginia State Bar’s flexible and practical AI ethics guidance offers a valuable framework for lawyers as they adjust to the ever-changing generative AI landscape. By focusing on overarching principles, this thoughtful approach ensures adaptability as technology evolves. For those seeking reliable guidance, Virginia’s model offers a useful roadmap for remaining ethically grounded amid unprecedented technological advancements.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


The ABA Weighs In on the Ethical Use of AI

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

The ABA Weighs In on the Ethical Use of AI

Generative artificial intelligence (GenAI) is advancing at exponential rates. Since the release of GPT-4 less than two years ago, there has been an explosion of GenAI tools designed for legal professionals. With the rapid proliferation of software incorporating this technology comes increased concerns about ethical and secure implementation. 

Ethics committees across the country have stepped up to the plate to offer guidance to assist lawyers seeking to adopt GenAI into their firms. Most recently, the American Bar Association weighed in, handing down Formal Opinion 512 at the end of July. 

In its opinion, the ABA Standing Committee on Ethics and Professional Responsibility acknowledged the significant productivity gains that GenAI can offer legal professionals, explaining that GenAI “tools offer lawyers the potential to increase the efficiency and quality of their legal services to clients…Lawyers must recognize inherent risks, however.”

Importantly, the Committee also cautioned that, when using these tools, “lawyers may not abdicate their responsibilities by relying solely on a GAI tool to perform tasks that call for the exercise of professional judgment.” In other words, while GenAI can significantly increase efficiencies, lawyers should not rely on it at the expense of their own professional judgment.

Next, the Committee addressed the key ethical issues presented when lawyers incorporate GenAI tools into their workflows. First and foremost, technology competency was emphasized. According to the Committee, lawyers must stay updated on the evolving nature of GenAI technologies and have a reasonable understanding of the technology’s benefits, risks, and limitations.

Confidentiality obligations were also discussed, and the Committee highlighted the need to ensure that GenAI does not inadvertently expose client data and that systems should not be allowed to train on confidential data. Notably, the Committee required lawyers to obtain informed client consent before using these tools in ways that could impact client confidentiality, especially when using consumer-facing tools that train on inputted data.

The Committee also provided guidance on supervision requirements, advising that lawyers in managerial roles must ensure compliance with their firms’ established GenAI policies. The supervisory duty includes implementing policies, training personnel, and supervising the use of AI to prevent ethical violations.

The Committee highlighted the importance of reviewing all GenAI output to ensure its accuracy: “(D)uties to the tribunal likewise require lawyers, before submitting materials to a court, to review these outputs, including analysis and citations to authority, and to correct errors, including misstatements of law and fact, a failure to include controlling legal authority, and misleading arguments.”

Finally, the Committee offered insight into the ethics of legal fees charged when using GenAI to address client matters. The Committee explained that lawyers may charge fees encompassing the time spent reviewing AI-generated outputs but may not charge clients for time spent learning to use GenAI software. Importantly, it is impermissible for lawyers to invoice clients for time that would have been spent on work but for the efficiencies gained from using GenAI tools. In other words, clients can only be billed for the work completed, not for time saved due to GenAI.

Each new ethics opinion, like ABA Formal Opinion 512, offers much-needed guidance that enables lawyers to integrate AI tools into their firms thoughtfully and responsibly. By addressing emerging concerns and providing clear standards, these opinions reduce uncertainty and pave the way for forward-thinking lawyers to adopt GenAI confidently. While the ABA’s opinion is only advisory, it represents a positive trend of responsive guidance that arms the legal profession with the information needed to innovate ethically and adopt emerging technologies in today’s ever-changing AI era.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Principal Legal Insights Strategist at MyCase, LawPay, CASEpeer, and Docketwise, AffiniPay companies. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


AI’s Role in Modern Law Practice Explored by Texas and Minnesota Bars

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

AI’s Role in Modern Law Practice Explored by Texas and Minnesota Bars

If you’re not yet convinced that artificial intelligence (AI) will change the practice of law, then you’re not paying attention. If nothing else, the sheer number of state bar ethics opinions and reports focused on AI released within the past two years should be a clear indication that AI’s effects on our profession will be profound.

Just this month, the Texas and Minnesota bar associations stepped into the fray, each issuing a report examining the issues presented when legal professionals use AI.

First, there was the Texas Taskforce for Responsible AI in the Law’s “Interim Report to the State Bar of Texas Board of Directors,” which addressed the benefits and risks of AI, along with recommendations for the ethical adoption of these tools.

The Minnesota State Bar Association (MSBA) Assembly’s report, “Implications of Large Language Models (LLMs) on the Unauthorized Practice of Law (UPL) and Access to Justice,” assessed broader issues related to how AI could potentially impact the provision of legal services within our communities. 

Despite the divergence in focus, the reports overlapped significantly. For example, both emphasized the ethical use of AI and the importance of ensuring that AI increases, rather than reduces, access to justice.

However, approaches to both issues differed. While the Texas Taskforce sought to develop guidelines for ethical AI use, the MSBA report suggested that there was no need to reinvent the wheel and that existing ethical guidance issued by other jurisdictions about AI tools like LLMs was likely sufficient to assist Minnesota legal professionals in navigating AI adoption.

On access to justice specifically, the Texas Taskforce highlighted the need to support legal aid providers in obtaining access to AI, while the MSBA’s Assembly recommended the creation of an “Access to Justice Legal Sandbox” that “would provide a controlled environment for organizations to use LLMs in innovative ways, without the fear of UPL prosecution.”

Overall, the MSBA Assembly’s approach was more exploratory, while the Texas Taskforce’s was more advisory. The MSBA Assembly’s report recommended detailed, actionable steps like creating an AI regulatory sandbox, launching pilot projects, and forming a Standing Committee to consider the report’s recommendations. In comparison, the Texas Taskforce identified broader goals, such as raising awareness of the cybersecurity issues surrounding AI, emphasizing the importance of AI education and CLEs, and proposing best practices for AI implementation.

The issuance of these reports on the heels of other bar association guidance represents a significant step forward for the legal profession. While we’ve historically resisted change, we’re now looking forward rather than backward. Bar associations are rising to the challenge during this period of rapid technological advancement, providing lawyers with much-needed, practical guidance designed to help them navigate the ever-changing AI landscape.

While Texas focuses on comprehensive guidelines and educational initiatives, Minnesota’s approach includes regulatory sandboxes and pilot projects. These differing strategies reflect a shared commitment to ensuring AI enhances access to justice and improves the lives of legal professionals. Together, these efforts indicate a profession that is, at long last, willing to adapt and innovate by leveraging emerging technologies to better serve society and uphold justice in an increasingly digital-first age.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

 


Pre-Trial AI Tools For Lawyers

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Pre-Trial AI Tools For Lawyers

I often receive emails from lawyers who reach out after having read one of my articles about generative artificial intelligence (AI) tools. They frequently seek advice about implementing AI software in their firm. Some focus on ethics and accuracy concerns, while others ask for my input on which tools to use to address a workflow issue in their firm. These communications can sometimes inspire me to write articles about a specific type of AI software since it’s a safe bet that other lawyers may be struggling with the same issue.

Recently, many emails have focused on pre-trial AI tools for lawyers. This makes sense, since AI tools can streamline many of the repeatable and tedious tasks involved in the discovery and motion stages of a case.

Of course, both legalese and litigation processes are complex, which means that consumer-focused generative AI tools such as ChatGPT or Claude are often inadequate, producing less-than-ideal output. Fortunately, legal technology companies that thoroughly understand legal workflows and lawyers’ unique needs are much better positioned to develop tools that streamline pre-trial workflows and generate reliable and useful content.

Because AI can address many of the pain points encountered during the early stages of litigation, it’s no surprise that a number of tools tackling pre-trial workflow challenges have been released over the past year.

Now that those products are available, let’s review some of the top categories. Note that I have not tested most of these tools and am only providing information about the available software. You must carefully vet the providers and take advantage of any free trials and demos offered before settling on a tool. To assist with the vetting process, you’ll find a list of suggested questions to ask legal cloud and AI providers here.

The first category of AI tools we’ll consider is pre-trial discovery management. This software automates the tedious and redundant process of preparing routine pleadings, discovery requests, and discovery responses. Once complaints and other legal documents are uploaded, this software will typically generate responsive documents, such as answers, interrogatories, requests for admission, and document requests and responses. The following AI software can assist in drafting these types of documents: Legalmation, Ai.law, Briefpoint, and LexthinkAI. For the technically curious, a rough sketch of how this category of tool might work behind the scenes appears below.
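The sketch that follows is illustrative only: it is a minimal Python example, assuming the OpenAI Python SDK, an OPENAI_API_KEY environment variable, and invented prompt wording. None of the products named above necessarily work this way.

```python
# Hypothetical sketch of a pleading-drafting tool: send an uploaded
# complaint to a general-purpose model and ask for a draft answer.
# Illustrative only; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_answer(complaint_text: str) -> str:
    """Generate a draft answer responding to each numbered allegation."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; a real product would choose its own model
        messages=[
            {
                "role": "system",
                "content": (
                    "For each numbered allegation in the complaint below, draft "
                    "a response (admit, deny, or deny knowledge or information "
                    "sufficient to form a belief), formatted as a numbered "
                    "draft answer for attorney review."
                ),
            },
            {"role": "user", "content": complaint_text},
        ],
    )
    # The output is a starting point only; a lawyer must review and verify
    # it before filing, consistent with the ethics guidance discussed above.
    return response.choices[0].message.content
```

Even in this toy version, the design point is the same one the ethics committees make: the model produces a draft for attorney review, not a filing.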

Next are AI tools for deposition summary and analysis. This software leverages AI to reduce the time spent reviewing and obtaining insights from deposition transcripts. Automating these tasks significantly streamlines the review process, allowing for more efficient case preparation and strategy development. Tools that offer this functionality include Legalmation, Lexthink.ai, Casemark, Lexis+ AI, and CoCounsel from Thomson Reuters. A sketch of the general approach such tools can take appears below.
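Again, the following is a minimal sketch of one common pattern, "chunk and summarize," under the same assumptions as above (the OpenAI Python SDK, placeholder model names, and invented prompts); the named products may work quite differently.

```python
# Hypothetical chunk-and-summarize sketch for long deposition transcripts.
# Illustrative only; chunk size, model names, and prompts are invented.
from openai import OpenAI

client = OpenAI()

def summarize_deposition(transcript: str, chunk_chars: int = 12000) -> str:
    # Split the transcript into pieces small enough for the model's context window.
    chunks = [transcript[i:i + chunk_chars]
              for i in range(0, len(transcript), chunk_chars)]

    # Summarize each piece separately.
    partials = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": (
                    "Summarize this deposition excerpt: key admissions, "
                    "inconsistencies, and topics, citing page/line where shown."
                )},
                {"role": "user", "content": chunk},
            ],
        )
        partials.append(resp.choices[0].message.content)

    # Merge the partial summaries into one overview for attorney review.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Combine these partial summaries into a single organized "
                "deposition summary."
            )},
            {"role": "user", "content": "\n\n".join(partials)},
        ],
    )
    return resp.choices[0].message.content
```

The chunking step matters because deposition transcripts often exceed a model’s context window; the tradeoff is that inconsistencies spanning two chunks can be missed at the merge step, which is one more reason attorney review of the output remains essential.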

Finally, there are AI tools that assist with brief drafting and analysis. AI is beneficial in this context since it can help edit text, improve writing, and adjust tone, reducing the time needed to produce complex legal documents. These tools typically function within word processors such as Microsoft Word. Products that assist with brief-writing functions include Clearbrief, Briefcatch, EZBriefs, and Wordrake.

Pre-trial AI tools are more than document robots; they're powerful allies that reduce friction and enhance efficiency. Even if you’re not yet ready to invest in these tools, it’s worthwhile to arm yourself with information about this category of software for future reference. These tools will undoubtedly grow more sophisticated over time and hold great potential. With them on your side, either now or down the road, you’ll be able to focus on crafting winning arguments while AI tackles tedious pre-trial tasks. The result? Less stress, happier lawyers and clients, and a future-proofed legal practice.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].