Technology Competence in the Age of Artificial Intelligence

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Technology Competence in the Age of Artificial Intelligence

With technology evolving so quickly, powered by the rapid development of generative artificial intelligence (AI) tools, keeping pace with change becomes all the more critical. For lawyers, the ethical requirement of maintaining technology competence plays a large part in that endeavor.

The duty of technology competence is relatively broad, and the obligations required by this ethical rule can sometimes be unclear, especially when applied to emerging technologies like AI. Rule 1.1 states that a “lawyer should provide competent representation to a client.” The comments to this rule clarify that to “maintain the requisite knowledge and skill, a lawyer should . . . keep abreast of the benefits and risks associated with technology the lawyer uses to provide services to clients or to store or transmit confidential information.”

With the proliferation of AI, this duty has become all the more relevant, especially as trusted legal software companies begin to incorporate this technology into the platforms that legal professionals use daily in their firms. Lawyers seeking to take advantage of the significant workflow efficiencies that AI offers must ensure that they’re doing so ethically. 

That’s easier said than done. In today’s fast-paced environment, what is required to meet that duty? Does it simply require that you understand the concept of AI? Do you have to understand how AI tools work? Is there a continuing obligation to track changes in AI as it advances? If you have no plans to use it, can you ignore it and avoid learning about it? 

Fortunately for New York lawyers, there are now two sets of ethics guidance available: the New York State Bar’s April 2024 Report and Recommendations from the Taskforce on Artificial Intelligence, and more recently, Formal Opinion 2024-5, which was issued by the New York City Bar Association.

The New York State Bar’s guidance on AI is overarching and general, particularly regarding technology competence. As the “AI and Generative AI Guidelines” provided in the Report explain, lawyers “have a duty to understand the benefits, risks and ethical implications associated with the Tools, including their use for communication, advertising, research, legal writing and investigation.”

While instructive, the advice is fairly general, and intentionally so. As the Committee explained, AI is no different than the technology that preceded it, and thus, “(m)any of the risks posed by AI are more sophisticated versions of problems that already exist and are already addressed by court rules, professional conduct rules and other law and regulations.” 

For lawyers seeking more concrete guidance on technology competence when adopting AI, look no further than the New York City Bar’s AI opinion. In it, the Ethics Committee offers significantly more granular insight into technology competence obligations.

First, lawyers must understand that current generative AI tools may rely on outdated information and produce output “that is false, inaccurate, or biased.” The Committee requires that lawyers understand not only what AI is but also how it works.

Before choosing a tool, there are several recommended courses of action. First, you must “understand to a reasonable degree how the technology works, its limitations, and the applicable [T]erms of [U]se and other policies governing the use and exploitation of client data by the product.” Additionally, you may want to learn about AI by “acquiring skills through a continuing legal education course.” Finally, consider consulting with IT professionals or cybersecurity experts.

The Committee emphasized the importance of carefully reviewing all responses for accuracy, explaining that generative AI outputs “may be used as a starting point but must be carefully scrutinized. They should be critically analyzed for accuracy and bias.” The duty of competence requires that lawyers ensure the original input is correct and that they analyze the corresponding response “to ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand, including as part of advocacy for the client.”

The Committee further clarified that you cannot delegate your professional judgment to AI and that you “should take steps to avoid overreliance on Generative AI to such a degree that it hinders critical attorney analysis fostered by traditional research and writing.” This means that lawyers should supplement all AI output “with human-performed research and supplement any Generative AI-generated argument with critical, human-performed analysis and review of authorities.”

If you plan to dive into generative AI, both sets of guidance should provide a solid roadmap to help you navigate your technology competence duties. Understanding how AI tools function and their limitations is essential when using this technology. By staying informed and applying critical judgment to the results, you can ethically leverage AI’s many benefits to provide your clients with the most effective, efficient representation possible.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].



Beyond Simple Tools: vLex's Vincent AI and the Future of Trusted Legal AI Platforms


****

Beyond Simple Tools: vLex's Vincent AI and the Future of Trusted Legal AI Platforms

There has been a noticeable shift in the way that legal technology companies are approaching generative artificial intelligence (AI) product development. Last year, several general legal assistant chatbots were released, mimicking the functionality of ChatGPT. Next came the emergence of single-point solutions, often developed by start-ups, to address specific workflow challenges such as drafting litigation documents, analyzing contracts, and legal research. 

As we approach the final quarter of 2024, established legal technology providers are more deeply integrating generative AI into comprehensive platforms, streamlining the user interface of legal research, practice management, and document management tools. Rather than standalone tools, generative AI is becoming a core feature of legal platforms, enabling users to access all their data and software seamlessly in one trusted environment.

A notable example is vLex, which acquired Fastcase last year and announced major updates to its Vincent AI product this week. Ed Walters, vLex’s Chief Strategy Officer, described the update as the transformation of Vincent from an AI-powered legal research and litigation drafting tool into a full-fledged legal AI platform.

This release expands workflows for transactional, litigation, and contract matters, enabling users to 1) analyze contracts, depositions, and complaints, 2) perform redline analysis and document comparisons, 3) upload documents to find related authorities, 4) generate research memoranda, 5) compare laws across jurisdictions, and 6) explore document collections to extract facts, create timelines, and more.

Legal research providers LexisNexis and Thomson Reuters likewise rolled out revamped versions of their generative AI assistants last month, reinforcing the trend toward AI-driven platforms. LexisNexis introduced Protégé, an AI assistant designed to meet each user’s specific needs and workflows and serve as the gateway to a suite of LexisNexis products. Thomson Reuters, meanwhile, unveiled CoCounsel 2.0, an enhanced version of the AI assistant it originally launched last year. Built on technology from its acquisition of Casetext’s CoCounsel, this upgraded legal assistant acts as the central interface for accessing many Thomson Reuters tools and resources, streamlining workflows across its products.

Despite the platform trend, single-point AI solutions remain valuable, especially for solo or small firms looking to streamline specific tasks like document analysis, drafting pleadings, or preparing discovery responses. These standalone tools continue to be developed and offer significant value for firms not already invested in a software ecosystem with integrated AI. If you’re in the market for an AI tool that accomplishes only one task, there’s most likely an AI tool available that fits the bill.

However, for many firms, AI integration into the software platforms they already use will likely be the most practical path forward. This approach helps to bridge the implementation gap and addresses common concerns about trust, which are often barriers to AI adoption. By partnering with trusted legal technology providers, firms can more comfortably adopt AI by leveraging the security and reliability of the platforms already in place.

With the deeper integration of AI into comprehensive legal platforms, the adoption process will become smoother, allowing legal professionals to enjoy the benefits of the reduced friction and tedium resulting from more streamlined law firm processes. This shift will allow legal professionals to focus on more meaningful work, improving both the practice of law and client service. 

Whether a standalone product or built into legal software platforms, generative AI offers significant potential, some of which is already being realized. It’s more than just another tool—it may very well redefine how law firms operate, paving the way for a more efficient and effective future.



Legal Ethics in the AI Era: The NYC Bar Weighs In


****

Legal Ethics in the AI Era: The NYC Bar Weighs In

Since November 2022, when ChatGPT was first released, many jurisdictions have issued AI guidance. In this column, I’ve covered the advice rendered by many ethics committees, including those in California, Florida, New Jersey, Michigan, New York, Pennsylvania, and Kentucky, as well as the American Bar Association and, most recently, Virginia.

Now, the New York City Bar Association has entered the ring, issuing Formal Ethics Opinion 2024-5 on August 7th. The New York City Bar Association Committee on Professional Ethics mirrored the California Bar’s approach, providing general guidelines in a chart format rather than prescriptive requirements. The Committee explained that “when addressing developing areas, lawyers need guardrails and not hard-and-fast restrictions or new rules that could stymie developments.” Instead, the goal was to provide assistance to New York attorneys through “advice specifically based on New York Rules and practice…”

Regarding confidentiality, the Committee distinguished between “closed systems” consisting of a firm’s “own protected databases,” like those typically provided by legal technology companies, and systems like ChatGPT that share inputted information with third parties or use it for their own purposes. Client consent is required for the latter, and even with “closed systems,” confidentiality protections within the firm must be maintained. The Committee cautioned that the terms of use for a generative AI tool should be reviewed regularly to ensure that the technology vendor is not using inputted information to train or improve its product in the absence of informed client consent.

Turning to the duty of technology competence, the Committee opined that when choosing a product, lawyers “should understand to a reasonable degree how the technology works, its limitations, and the applicable [T]erms of [U]se and other policies governing the use and exploitation of client data by the product.” Also emphasized was the need to avoid delegating professional judgment to these tools and to consider generative AI outputs to be a starting point. Not only must lawyers ensure that the output is accurate, but they should also take steps to “ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand.”

The duty of supervision was likewise addressed, with the Committee confirming that firms should have policies and training in place for lawyers and other employees in the firm regarding the permissible use of this technology, including ethical and practical uses, along with potential pitfalls. The Committee also advised that any client intake chatbots used by lawyers on their websites or elsewhere on behalf of the firm should be adequately supervised to avoid “the risk that a prospective client relationship or a lawyer-client relationship could be created.”

Not surprisingly, the Committee required lawyers to be aware of and comply with any court orders regarding AI use. Another court-related issue addressed was AI-created deepfakes and their impact on the judicial process. According to the Committee’s guidance, lawyers must screen all client-submitted evidence to assess whether it was generated by AI, and if there is a suspicion “that a client may have provided the lawyer with Generative AI-generated evidence, a lawyer may have a duty to inquire.”

Finally, the Committee turned to billing issues, agreeing with other jurisdictions that lawyers may charge for time spent crafting inquiries and reviewing output. Additionally, the Committee explained that firms may not bill clients for time saved as a result of AI usage and that firms may want to explore alternative fee arrangements in order to stay competitive, since AI may significantly impact legal pricing moving forward. Last but not least, any generative AI costs should be disclosed to clients, and any costs charged to clients “should be consistent with ethical guidance on disbursements and should comply with applicable law.”

The summary above simply provides an overview of the guidance provided. For a more nuanced perspective, you should read the opinion in its entirety. Whether you’re a New York lawyer or practice elsewhere, this guidance is worth reviewing and provides a helpful roadmap for adoption as we head into an AI-led future where technology competence is no longer an option. Instead, it is an essential requirement for the effective and responsible practice of law.



Practical and Adaptable AI Guidance Arrives From the Virginia State Bar 


****

Practical and Adaptable AI Guidance Arrives From the Virginia State Bar 

If you're concerned about the ethical issues surrounding artificial intelligence (AI) tools, the good news is that there's no shortage of guidance. A wealth of resources, guidelines, and recommendations are now available to help you navigate these concerns. 

Traditionally, bar associations have taken years to analyze the ethical implications of new and emerging technologies. However, generative AI has reversed this trend. Ethics guidance has emerged far more quickly, which is a very welcome change from the norm.

Since the general release of the first version of ChatGPT in November 2022, ethics committees have stepped up to the plate and offered much-needed AI guidance to lawyers at a remarkably rapid clip. Jurisdictions that have weighed in include California, Florida, New Jersey, Michigan, New York, Pennsylvania, and Kentucky, along with the American Bar Association.

Recently, Virginia entered the AI ethics discussion with a notably concise approach. Unlike the often lengthy and detailed analyses from other jurisdictions, the Virginia State Bar issued a streamlined set of guidelines, available as an update on its website (accessible online at the bottom of the page: https://vsb.org/Site/Site/lawyers/ethics.aspx). This approach stands out not only for its brevity but also for its focus on providing practical, overarching advice. By avoiding the intricacies of specific AI tools or interfaces, the Virginia State Bar has ensured that its guidance remains flexible and relevant, even as the technology rapidly evolves.

Importantly, the Bar acknowledged that regardless of the type of technology at issue, lawyers’ ethical obligations remain the same: “(A) lawyer’s basic ethical responsibilities have not changed, and many ethics issues involving generative AI are fundamentally similar to issues lawyers face when working with other technology or other people (both lawyers and nonlawyers).”

Next, the Bar examined confidentiality obligations, opining that just as lawyers must review data-handling policies relating to other types of technology, so, too, must they vet the methods used by AI providers when handling confidential client information. The Bar explained that while legal-specific providers can often promise better data security, there is still an obligation to ensure a full understanding of their data management approach: “Legal-specific products or internally-developed products that are not used or accessed by anyone outside of the firm may provide protection for confidential information, but lawyers must make reasonable efforts to assess that security and evaluate whether and under what circumstances confidential information will be protected from disclosure to third parties.”

One area where the Bar’s approach conformed to that of most jurisdictions was client consent. While the ABA suggested that explicit client consent was required in many cases when using AI, the Virginia Bar agreed with most other ethics committees, concluding that there “is no per se requirement to inform a client about the use of generative AI in their matter” unless there are extenuating circumstances, such as an agreement with the client or the increased risks encountered when using consumer-facing products.

The Bar also considered supervisory requirements, emphasizing the importance of reviewing all output regardless of its source. According to the Bar, as “with any legal research or drafting done by software or by a nonlawyer assistant, a lawyer has a duty to review the work done and verify that any citations are accurate (and real),” and the duty of supervision “extends to generative AI use by others in a law firm.”

Next, the Bar provided insight into the impact of AI usage on legal fees. The Bar agreed that lawyers cannot charge clients for the time saved as a result of using AI: “A lawyer may not charge an hourly fee in excess of the time actually spent on the case and may not bill for time saved by using generative AI. The lawyer may bill for actual time spent using generative AI in a client’s matter or may wish to consider alternative fee arrangements to account for the value generated by the use of generative AI.”

On the issue of passing the costs of AI software on to clients, the Bar concluded that doing so was not permissible unless the fee is both reasonable and “permitted by the fee agreement.”

Finally, the Bar focused on a handful of recently issued court rules that forbid the use of AI for document preparation, highlighting the importance of being aware of and complying with all court disclosure requirements regarding AI usage.

The Virginia State Bar’s flexible and practical AI ethics guidance offers a valuable framework for lawyers as they adjust to the ever-changing generative AI landscape. By focusing on overarching principles, this thoughtful approach ensures adaptability as technology evolves. For those seeking reliable guidance, Virginia’s model offers a useful roadmap for remaining ethically grounded amid unprecedented technological advancements.



The ABA Weighs in on the Ethical Use Of AI


****

The ABA Weighs in on the Ethical Use Of AI

Generative artificial intelligence (GenAI) is advancing at exponential rates. Since the release of GPT-4 less than two years ago, there has been an explosion of GenAI tools designed for legal professionals. With the rapid proliferation of software incorporating this technology comes increased concerns about ethical and secure implementation. 

Ethics committees across the country have stepped up to the plate to offer guidance to assist lawyers seeking to adopt GenAI into their firms. Most recently, the American Bar Association weighed in, handing down Formal Opinion 512 at the end of July. 

In its opinion, the ABA Standing Committee on Ethics and Professional Responsibility acknowledged the significant productivity gains that GenAI can offer legal professionals, explaining that GenAI “tools offer lawyers the potential to increase the efficiency and quality of their legal services to clients… Lawyers must recognize inherent risks, however."

Importantly, the Committee also cautioned that when using these tools, “lawyers may not abdicate their responsibilities by relying solely on a GAI tool to perform tasks that call for the exercise of professional judgment.” In other words, while GenAI can significantly increase efficiencies, lawyers should not rely on it at the expense of their own professional judgment.

Next, the Committee addressed the key ethical issues presented when lawyers incorporate GenAI tools into their workflows. First and foremost, technology competency was emphasized. According to the Committee, lawyers must stay updated on the evolving nature of GenAI technologies and have a reasonable understanding of the technology’s benefits, risks, and limitations.

Confidentiality obligations were also discussed, and the Committee highlighted the need to ensure that GenAI does not inadvertently expose client data and that systems should not be allowed to train on confidential data. Notably, the Committee required lawyers to obtain informed client consent before using these tools in ways that could impact client confidentiality, especially when using consumer-facing tools that train on inputted data.

The Committee also provided guidance on supervision requirements, advising that lawyers in managerial roles must ensure compliance with their firms’ established GenAI policies. The supervisory duty includes implementing policies, training personnel, and supervising the use of AI to prevent ethical violations.

The Committee highlighted the importance of reviewing all GenAI output to ensure its accuracy: “(D)uties to the tribunal likewise require lawyers, before submitting materials to a court, to review these outputs, including analysis and citations to authority, and to correct errors, including misstatements of law and fact, a failure to include controlling legal authority, and misleading arguments.”

Finally, the Committee offered insight into the ethics of legal fees charged when using GenAI to address client matters. The Committee explained that lawyers may charge fees encompassing the time spent reviewing AI-generated outputs but may not charge clients for time spent learning to use GenAI software. Importantly, it is impermissible for lawyers to invoice clients for time that would have been spent on work but for the efficiencies gained from using GenAI tools. In other words, clients can only be billed for the work completed, not for time saved due to GenAI.

Each new ethics opinion, like ABA Formal Opinion 512, offers much-needed guidance that enables lawyers to integrate AI tools into their firms thoughtfully and responsibly. By addressing emerging concerns and providing clear standards, these opinions reduce uncertainty and pave the way for forward-thinking lawyers to adopt GenAI confidently. While the ABA’s opinion is only advisory, it represents a positive trend of responsive guidance that arms the legal profession with the information needed to innovate ethically and adopt emerging technologies in today’s ever-changing AI era.



AI’s Role in Modern Law Practice Explored by Texas and Minnesota Bars


****

AI’s Role in Modern Law Practice Explored by Texas and Minnesota Bars

If you’re not yet convinced that artificial intelligence (AI) will change the practice of law, then you’re not paying attention. If nothing else, the sheer number of state bar ethics opinions and reports focused on AI released within the past two years should be a clear indication that AI’s effects on our profession will be profound.

Just this month, the Texas and Minnesota bar associations stepped into the fray, each issuing reports that studied the issues presented when legal professionals use AI. 

First, there was the Texas Taskforce for Responsible AI in the Law’s “Interim Report to the State Bar of Texas Board of Directors,” which addressed the benefits and risks of AI, along with recommendations for the ethical adoption of these tools.

The Minnesota State Bar Association (MSBA) Assembly’s report, “Implications of Large Language Models (LLMs) on the Unauthorized Practice of Law (UPL) and Access to Justice,” assessed broader issues related to how AI could potentially impact the provision of legal services within our communities. 

Despite the divergence in focus, the reports covered significantly overlapping topics. For example, both emphasized the ethical use of AI and the importance of ensuring that AI increases, rather than reduces, access to justice.

However, their approaches to these issues differed. While the Texas Taskforce sought to develop guidelines for ethical AI use, the MSBA report suggested that there was no need to reinvent the wheel: existing ethical guidance issued by other jurisdictions about AI tools like LLMs was likely sufficient to help Minnesota legal professionals navigate AI adoption.

Both reports also emphasized the value of ensuring that AI tools enhance access to justice. The Texas Taskforce highlighted the need to support legal aid providers in obtaining access to AI, while the MSBA’s Assembly recommended the creation of an “Access to Justice Legal Sandbox” that “would provide a controlled environment for organizations to use LLMs in innovative ways, without the fear of UPL prosecution.”

Overall, the MSBA Assembly’s approach was more exploratory, while the Texas Taskforce’s was more advisory. The MSBA Assembly’s report recommended detailed, actionable steps, such as creating an AI regulatory sandbox, launching pilot projects, and forming a Standing Committee to consider the recommendations made in the report. In comparison, the Texas Taskforce identified broader goals, such as raising awareness of cybersecurity issues surrounding AI, emphasizing the importance of AI education and CLEs, and proposing AI implementation best practices.

The issuance of these reports on the heels of other bar association guidance represents a significant step forward for the legal profession. While we’ve historically resisted change, we’re now looking forward rather than backward. Bar associations are rising to the challenge during this period of rapid technological advancement, providing lawyers with much-needed, practical guidance designed to help them navigate the ever-changing AI landscape.

While Texas focuses on comprehensive guidelines and educational initiatives, Minnesota’s approach includes regulatory sandboxes and pilot projects. These differing strategies reflect a shared commitment to ensuring AI enhances access to justice and improves the lives of legal professionals. Together, these efforts indicate a profession that is, at long last, willing to adapt and innovate by leveraging emerging technologies to better serve society and uphold justice in an increasingly digital-first age.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

Pre-Trial AI Tools For Lawyers

Stacked3Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Pre-Trial AI Tools For Lawyers

I often receive emails from lawyers who reach out after having read one of my articles about generative artificial intelligence (AI) tools. They frequently seek advice about implementing AI software in their firm. Some focus on ethics and accuracy concerns, while others ask for my input on which tools to use to address a workflow issue in their firm. These communications can sometimes inspire me to write articles about a specific type of AI software since it’s a safe bet that other lawyers may be struggling with the same issue.

Recently, many emails have focused on pre-trial AI tools for lawyers. This makes sense since AI tools can streamline many repeatable and tedious tasks involved during the discovery and motion stages of a case. 

Of course, both legalese and litigation processes are complex, which means that consumer-focused generative AI tools such as ChatGPT or Claude are often inadequate, producing less-than-ideal output. Fortunately, legal technology companies that thoroughly understand legal workflows and lawyers’ unique needs are much better positioned to develop tools that streamline pre-trial workflows and generate reliable and useful content.

Because AI can address many of the pain points encountered during the early stages of litigation, it’s no surprise that a wave of software tools targeting pre-trial workflow challenges has been released over the past year.

Now that those products are available, let’s review some of the top categories. Note that I have not tested most of these tools and am only providing information regarding available software. You must carefully vet the providers and take advantage of the free trials and demos offered before settling on a tool. To assist with the vetting process, you’ll find a list of suggested questions to ask legal cloud and AI providers here.

The first category of AI tools we’ll consider is pre-trial discovery management. This software automates the tedious and redundant process of preparing routine pleadings, discovery requests, and discovery responses. Upon uploading complaints and other legal documents, this software will typically generate responsive pleadings, such as answers, interrogatories, requests for admission, and document requests and responses. The following AI software can assist in drafting these types of documents: Legalmation, Ai.law, Briefpoint, and LexthinkAI. 

Next are AI tools for deposition summary and analysis. This software leverages AI algorithms to reduce the time spent reviewing and obtaining insights from deposition transcripts. Automating these tasks significantly streamlines the review process, allowing for more efficient case preparation and strategy development. Tools that offer this functionality include Legalmation, Lexthink.ai, Casemark, Lexis+ AI, and CoCounsel, a Thomson Reuters company.

Finally, there are AI tools that assist with brief drafting and analysis. AI technology is beneficial in this context since it can help edit text, improve writing, and adjust tone, reducing the time needed to produce complex legal documents. These tools typically function within word processors such as Microsoft Word. Products that assist with brief writing include Clearbrief, Briefcatch, EZBriefs, and Wordrake.

Pre-trial AI tools are more than document robots; they're powerful allies that reduce friction and enhance efficiency. Even if you’re not yet ready to invest in these tools at this early stage, it’s worthwhile to arm yourself with information about this category of software for future reference. They will undoubtedly increase in sophistication over time and have great potential. With these tools on your side, either now or down the road, you’ll be able to focus on crafting winning arguments while AI tackles tedious pre-trial tasks. The result? Less stress, happier lawyers and clients, and a future-proofed legal practice.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

New Report Highlights GenAI Adoption Trends in Law

Stacked3Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

New Report Highlights GenAI Adoption Trends in Law

For legal professionals facing an ever-evolving technology landscape shaped by rapid advancements in artificial intelligence, data-driven decisions are the key to successful adaptation. Because change is occurring so quickly, up-to-date information is essential. That’s where the Thomson Reuters Institute 2024 Generative AI in Professional Services Report comes in.

This report highlights how professionals, including lawyers, view and use generative AI (GenAI). It offers insights into legal professionals’ attitudes and adoption rates and provides law firm leaders with timely industry data. Using this information, you can make informed choices about when and how to implement GenAI in your firm.

First, let’s consider legal professionals’ perspectives on GenAI. The report shows that only a slight majority view it as an appropriate tool for use in a law firm: while 85% of legal professionals believe AI could be applied to their work, only 51% say it should be.

Data from the report also indicates that ethical concerns about the unauthorized practice of law could drive some of the reticence surrounding GenAI. The majority (77%) of legal respondents cited this issue as either a significant threat or somewhat of a threat to the profession.

Our judicial counterparts are even more cautious about incorporating GenAI into their workflows, with 60% having no current plans to use it and only 8% currently experimenting with it.

Also notable is that legal-specific GenAI tools are not yet mainstream in our profession. According to the report, only 12% of legal professionals report using legal-specific GenAI tools today, but 43% plan to do so within the next three years. In comparison, consumer GenAI tools are currently more popular, with 27% of legal industry respondents using them and another 20% planning to do so within the next three years. In other words, within a few years, the adoption of legal-specific tools will far outpace that of consumer tools in the legal space, and rightly so, since legal providers have a far better understanding of the unique needs of legal professionals.

For those currently using GenAI, top use cases in law firms currently include legal research, document review, brief or memo drafting, document summarization, and correspondence drafting.

Data from the report showed that compared to their law firm counterparts, corporate legal departments are more document-focused in their GenAI usage. Contract drafting comes in first, followed by document review, legal research, document summarization, and extracting contract data.

Similarly, government and court respondents also focused primarily on leveraging GenAI tools to work with documents. Use cases included legal research, document review, document summarization, brief or memo drafting, and contract drafting.

Another interesting data point from this report revolved around perspectives on shifting the cost of GenAI tools when used to provide legal services. According to the report, law firms report primarily absorbing GenAI investment costs as firm overhead (51%), with a smaller portion passing the costs to customers on a case-by-case basis (16%) or across the board (9%). 4% use other methods, and 20% have not yet determined their approach.

Alternative pricing for legal services was also discussed, with more than a third of respondents (39%) sharing that GenAI may result in an increased use of alternative fees. Even so, another 28% were unclear as to how GenAI adoption might impact law firm billing moving forward.

Last but not least, recruitment: 45% of legal professionals surveyed indicated that their firms do not plan to target applicants with AI or GenAI skills, while 17% identified it as a "nice to have" skill. Only 2% said their firms would require it.

If you haven’t read this report, now’s the time. It provides valuable data that highlights the growing awareness of AI's potential impact on our profession, even though adoption rates vary. Many legal professionals see the value of AI but remain cautious about fully embracing it. 

The findings from this report offer valuable insights that can guide law firm leaders in making informed decisions about integrating AI into their firm’s workflows. As AI technology advances, insights like these will help you strategically decide when and how to implement GenAI, ultimately shaping the future of your law practice.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

 


Balancing Innovation and Ethics: Kentucky Bar Association’s Preliminary Stance on AI for Lawyers

Stacked3Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Balancing Innovation and Ethics: Kentucky Bar Association’s Preliminary Stance on AI for Lawyers

The rapid advancement of generative artificial intelligence (AI) technology has had many effects, one of which has been to spur bar association ethics committees into action. In less than two years, at least eight jurisdictions have issued AI guidance in one form or another, including California, Florida, New Jersey, Michigan, and New York, which I’ve covered in this column. 

Most recently, I discussed a joint opinion from the Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility and the Philadelphia Bar Association Professional Guidance Committee, Joint Formal Opinion 2024-200, and promised to subsequently tackle the Kentucky Bar Association’s March opinion, Ethics Opinion KBA E-457, which I’ll cover today.

This opinion was issued in March and published to the KBA membership in the May/June edition of Bench & Bar magazine. It will become final after the 30-day public comment period expires.

This opinion covers a wide range of issues, including technology competency, confidentiality, client billing, notification to courts and clients regarding AI usage, and the supervision of others in the firm who use AI. 

Notably, when providing the necessary context for the guidance provided, the Committee wisely acknowledged that hard and fast rules regarding AI adoption by law firms are inadvisable since the technology is advancing rapidly, and every law firm will use it in different, unique ways: “The Committee does not intend to specify what AI policy an attorney should follow because it is the responsibility of each attorney to best determine how AI will be used within their law firm and then to establish an AI policy that addresses the benefits and risks associated with AI products. The fact is that the speed of change in this area means that any specific recommendation will likely be obsolete from the moment of publication.”

Accordingly, the Committee’s advice was fairly elastic and designed to change with the times as AI technology improves. The Committee emphasized the importance of maintaining technology competency, which includes staying “abreast of the use of AI in the practice of law,” along with the corresponding duties to continually take steps to maintain client confidentiality and to carefully “review court rules and procedures as they relate to the use of AI, and to review all submissions to the Court that utilized Generative AI to confirm the accuracy of the content of those filings.”

As other bar associations have done, the Kentucky Bar Ethics Committee also highlighted the issues surrounding client communication and billing when using AI to streamline legal work. 

Departing from the hard and fast requirement that some bars have put in place regarding notifying clients whenever AI is used in their matter, the Committee took a more moderate approach, requiring lawyers to do so only under certain circumstances. The Committee explained that there is no “ethical duty to disclose the rote use of AI generated research for a client's matter unless the work is being outsourced to a third party; the client is being charged for the cost of AI; and/or the disclosure of AI generated research is required by Court Rules.”

Next, the Committee determined that when invoicing a client for work performed more efficiently using AI, lawyers should “consider reducing the amount of attorney's fees being charged the client when appropriate under the circumstances.” Similarly, lawyers may pass on expenses related to AI software if there is an “acknowledgment in writing whereby the client agrees in advance to reimburse the attorney for the attorney's expense in using AI.” However, the Committee cautioned that the “costs of AI training and keeping abreast of AI developments should not be charged to clients.”

Finally, the Committee confirmed that lawyers who are partners or managers have a duty to ensure the ethical use of AI by other lawyers and employees, which involves appropriate training and supervision.

This opinion provides a thorough analysis of the issues and sound advice regarding AI usage in law firms. I’ve only hit the high points, so make sure to read the entire opinion for the Committee’s more nuanced perspective, especially if you are a Kentucky attorney. AI is here to stay and will inevitably impact your practice, likely much sooner than you might expect, given the rapid change we’re now experiencing. Invest time into learning about this technology now, so you can adapt to the times and incorporate it into your law firm, ultimately providing your clients with more efficient and effective representation.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


More AI Ethics Guidance Arrives With Pennsylvania Weighing In

Stacked3Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

More AI Ethics Guidance Arrives With Pennsylvania Weighing In

The rate of technological change this year has been off the charts. Lately, there’s daily news of generative artificial intelligence (AI) announcements about new products, feature releases, or acquisitions. Advancement has been occurring at such a rapid clip that it’s more challenging than ever to keep up with the pace of change — blink, and you’ll miss it!

Given how quickly AI has infiltrated our lives and profession, it’s been all the more impressive to watch bar association professional disciplinary committees step up to the plate and issue timely, much-needed guidance. Even though generative AI has been around for less than two years, California, Florida, New Jersey, Michigan, and New York had already issued GenAI guidance for lawyers as of April 2024.

Just a few months later, two other states, Pennsylvania and Kentucky, have weighed in, providing lawyers in their jurisdictions with roadmaps for ethical AI usage. Today, I’ll discuss the Pennsylvania guidance and will cover Kentucky’s in my next article.

On May 22, the Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility and the Philadelphia Bar Association Professional Guidance Committee issued Joint Formal Opinion 2024-200. In the introduction to the opinion, the joint Committee explained why it is critical for lawyers to learn about AI: “This technology has begun to revolutionize the way legal work is done, allowing lawyers to focus on more complex tasks and provide better service to their clients…Now that it is here, attorneys need to know what it is and how (and if) to use it.” A key way to meet that requirement is to take advantage of “continuing education and training to stay informed about ethical issues and best practices for using AI in legal practice.”

The joint Committee emphasized the importance of understanding both the risks and benefits of incorporating AI into your firm’s workflows. It also stated that if used appropriately and “with appropriate safeguards, lawyers can utilize artificial intelligence” in a compliant manner. 

The opinion included many recommendations and requirements for lawyers planning to use AI in their practices. First and foremost, the Committees emphasized basic competence and the need to “ensure that AI-generated content is truthful, accurate, and based on sound legal reasoning.” This obligation requires lawyers to confirm “the accuracy and relevance of the citations they use in legal documents or arguments.” 

Another area of focus was on protecting client confidentiality. The joint Committee opined that lawyers must take steps to vet technology providers with the end goal being to “safeguard information relating to the representation of a client and ensure that AI systems handling confidential data adhere to strict confidentiality measures.”

Notably, the joint Committee highlighted the importance of ensuring that AI tools and their output are unbiased and accurate. This means that when researching a product and provider, steps must be taken to “ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.”

Transparency with clients was also discussed. Lawyers were cautioned to ensure clear communication “with clients about their use of AI technologies in their practices…(including) how such tools are employed and their potential impact on case outcomes.” Lawyers were also advised to clearly communicate with clients about AI-related expenses, which should be “reasonable and appropriately disclosed to clients.”

This guidance — emphasizing competence, confidentiality, and transparency — is a valuable resource for lawyers seeking to integrate AI into their practices. This timely advice helps ensure ethical AI usage in law firms, especially for Pennsylvania practitioners. For even more helpful ethics analysis, stay tuned for my next article, where we’ll examine Kentucky's recent AI guidance.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

The GenAI Courtroom Conundrum

Stacked3Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

The GenAI Courtroom Conundrum

In the wake of GPT-4's release a year ago, there has been a notable uptick in the use of generative artificial intelligence (AI) tools by lawyers when drafting court filings. However, with this embrace of cutting-edge technology, there has been an increase in well-publicized incidents involving fabricated case citations.

Here is but a sampling of incidents from earlier this year:

  • A Massachusetts lawyer faced sanctions for submitting multiple memoranda riddled with false case citations (2/12/24).
  • A British Columbia attorney was reprimanded and ordered to pay opposing counsel's costs after relying on AI-generated "hallucinations" (2/20/24).
  • A Florida lawyer was suspended by the U.S. District Court for the Middle District of Florida for filing submissions based on fictitious precedents (3/8/24).
  • A pro se litigant's case was dismissed after the court called them out for submitting false citations for the second time (3/21/24).
  • The 9th Circuit summarily dismissed a case without addressing the merits due to the attorney's reliance on fabricated cases (3/22/24).

Judges have been understandably unhappy with this trend, and courts across the nation have issued a patchwork of orders, guidelines, and rules regulating the use of generative AI in their courtrooms. According to data collected by RAILS (Responsible AI in Legal Services) in March, there were 58 different directives on record at that time.

This haphazard manner of addressing AI usage in courts is problematic. First, it fails to provide much-needed consistency and clarity. Second, it evinces a lack of understanding of the extent to which AI has been embedded within many technology products used by legal professionals for years now — in ways that are not always entirely transparent to the end user.

Fortunately, as this technology has become more commonplace and is better understood, some of our judicial counterparts have recently begun to revise their perspective, offering newfound approaches to AI-supported litigation filings. They have wisely decided that rather than over-regulating the technology, our profession would be better served if there was reinforcement of existing rules that require due diligence and careful review of all court submissions, regardless of the tools employed.

For example, earlier this month, the Fifth Circuit U.S. Court of Appeals in New Orleans reversed course and chose not to adopt a proposed rule that would have required lawyers to certify that if an AI tool was used to assist in drafting a filing, all citations and legal analysis had been reviewed for accuracy. Lawyers who violated this rule could have faced sanctions and the risk that their filings would be stricken from the record. In lieu of adopting the rule, the Court advised lawyers to ensure “that their filings with the court, including briefs…(are) carefully checked for truthfulness and accuracy as the rules already require.”

In another case, a judge on the Eleventh Circuit U.S. Court of Appeals used ChatGPT and other generative AI tools to assist in writing his concurrence in Snell v. United Specialty Insurance Company, No. 22-12581. In his concurrence, he explained that he used the tools to aid his understanding of what the term “landscaping” meant within the context of the case. The court was tasked with determining whether the installation of an in-ground trampoline constituted “landscaping” as defined by the insurance policy applicable to a negligence claim. Ultimately, he found that the AI chatbots did, indeed, provide the necessary insight, and referenced this fact in the opinion.

In other words, the times they are a-changin’. The rise of generative AI in legal practice has brought significant challenges, but reassuringly, the legal community is adapting. The focus is beginning to shift from restrictive regulations toward reinforcing existing ethical standards, including technology competence and due diligence. Recent developments suggest a balanced approach is emerging — acknowledging AI's potential while emphasizing lawyers' responsibility for accuracy. This path strikes the right balance between technological progress and professional integrity, and my hope is that more of our esteemed jurists choose it.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

New York On the Ethics of Expensing Credit Card Processing Fees to Clients

Stacked3Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

New York On the Ethics of Expensing Credit Card Processing Fees to Clients

One of the key business challenges lawyers face is getting paid. When cash or check were the only choices, there was little payment flexibility available for law firms or their clients. Today, things have changed. Most billing and law practice management software programs have built-in features that streamline the billing process and allow law firms to offer payment convenience in ways never before possible. From payment plans to credit cards and even “Pay Later” legal fee loan options, lawyers and their clients have more options than ever.

Regardless of the payment method, lawyers must comply with the ethics requirements surrounding legal fees. As new payment methods become available, ethics committees often weigh in to ensure that lawyers have sufficient guidance when accepting alternative payment methods. 

One area that has received considerable attention from regulators over the years is credit card payments, which are now commonly accepted in most law firms. Despite their widespread application, novel ethics issues surrounding credit card payments occasionally arise, which require input, such as the recent issue addressed by the New York State Bar Association Committee on Professional Ethics in Ethics Opinion 1258A.

At issue was whether a lawyer may pass on merchant processing fees to clients as an expense. At the outset, the Committee acknowledged that accepting credit cards as payment has long been permissible in New York provided that 1) the legal fee is reasonable, 2) client confidentiality is protected, 3) the credit card company’s actions do not impact client representation, 4) the client is advised before the charge is incurred and has the chance to dispute any billing errors, and 5) any disputes regarding the legal fee are handled according to the fee dispute resolution program outlined in 22 N.Y.C.R.R. Part 137.

Next, the Committee turned to the issue of expensing credit card fees to clients, explaining that excessive fees or expenses are prohibited by Rule 1.5(a) of the New York Rules of Professional Conduct (Rules). 

According to the Committee, this prohibition applies to a merchant processing fee since it is considered an “expense” under the Rules. As long as lawyers avoid charging excessive fees as defined in Rule 1.5(a), it is permissible to pass on merchant processing fees incurred when legal fees are paid by credit card to clients as expenses.  

Next, the Committee turned to Ethics Opinion 1050 from 2015, which addressed credit card payments made in the context of advance retainers. In that opinion, the Committee permitted the inquiring lawyer to, “as an administrative convenience, charge a client a nominal amount over the actual processing fees imposed on the lawyer by a credit card company in connection with the client’s payment by credit card of the lawyer’s advance payment retainer.”  

Doing so was conditioned upon 1) notifying the client and obtaining consent and 2) ensuring the additional fee was nominal and the total amount of the advance payment retainer, the processing fees, and the convenience fee were likewise reasonable under the circumstances.

The Committee then turned to the case at hand and applied the same principles, concluding that when legal fees beyond the initial retainer are paid by credit card, a “lawyer may pass on a merchant processing fee to clients who pay for legal services by credit card provided that both the amount of the legal fee and the amount of the processing fee are reasonable, and provided that the lawyer has explained to the client and obtained client consent to the additional charge in advance.”

In 2024, lawyers have unprecedented flexibility in payment methods. However, a thorough understanding of your ethical obligations is essential, especially when your firm broadens client payment options. This opinion is an important reminder to carefully navigate ethics rules when accepting credit card payments from clients, especially as technology continues to evolve and impact how law firms do business. 

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

 


ABA Weighs in on Listserv Ethics

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

ABA Weighs in on Listserv Ethics

At first glance, you might assume that this article was published in the early 1990s and was reprinted by mistake. If so, you’d be wrong. The truth is, the American Bar Association (ABA), in its infinite wisdom, decided that May 2024—in the midst of the generative AI technology revolution—was the ideal time to address the ethical issues presented when lawyers interact on listservs, an email technology that has existed since 1986.

So hold on to your hats, early technology adopters, while we break this opinion down so that you have the ethics guidance needed to appropriately interact when using technology that has been around longer than the World Wide Web.

In Formal Opinion 511, the ABA considered whether a lawyer who, while interacting on a listserv, seeks advice regarding a client matter is “impliedly authorized to disclose information relating to the representation of a client or information that could lead to the discovery of such information.”

At the outset, the ABA Standing Committee on Ethics and Professional Responsibility explained that the duty of confidentiality prohibits lawyers from disclosing any information related to a client’s representation, no matter the source. Protected information is not limited to “communications protected by attorney-client privilege” and includes clients’ identities and even publicly available information like transcripts of proceedings.

Next, the Committee acknowledged that generally speaking, lawyers are permitted to consult with an attorney outside of their firm regarding a matter and may reveal information related to the representation in the absence of client consent, but only if the “information is anonymized or presented as a hypothetical and the information is revealed under circumstances in which…the information will not be further disclosed or otherwise used against the consulting lawyer’s client.” In addition, the information shared may not be privileged and must be non-prejudicial.

However, according to the Committee, this implied authority to disclose anonymized or hypothetical case-related information to other attorneys is limited to professional consultations with other lawyers. This is because “participation in most lawyer listserv discussion groups is significantly different from seeking out an individual lawyer or personally selected group of lawyers practicing in other firms for a consultation about a matter.”

The Committee noted that listservs can consist of unknown participants, and posts can be forwarded or otherwise shared and viewed by non-participants, including other lawyers representing a party in the same matter. As a result, “posting to a listserv creates greater risks than the lawyer-to-lawyer consultation.”

Given the risks, in the absence of client consent, lawyers are ethically prohibited from posting anything to a listserv that could reasonably be linked to an identifiable client, whether the intent is to obtain assistance in a case or otherwise engage on the listserv. 

Listserv use is not forbidden, however, and lawyers can interact in other ways, such as asking more general questions, sharing news updates, requesting access to a case, or seeking a form or document template.

Finally, the Committee expanded the opinion’s rationale to other types of interactions. The Committee opined that the “principles set forth in this opinion…apply equally when lawyers communicate about their law practices with individuals outside their law firms by other media and in other settings, including when lawyers discuss their work at in-person gatherings.”

That single line, easily missed at the beginning of the opinion, ensures that the Committee’s conclusions stand the test of time. 

This opinion on listserv ethics is a necessary reminder of the importance of confidentiality in all lawyer interactions, even when using long-established technologies like listservs. While the ABA’s timing could have been better, this advisory opinion is worth a thorough read. Take a look and then keep the Committee’s advisements in mind as you interact with other lawyers online and off. 


 


New Reports Highlight Generative AI Adoption Trends in Law

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

New Reports Highlight Generative AI Adoption Trends in Law

Generative artificial intelligence (GenAI) technology is advancing exponentially compared to technologies of years past. The pace of change is happening so quickly that it’s hard to keep up. One way to track the impact of GenAI tools on the legal profession is through survey reports, so I’m always interested when new ones are released that include data on perspectives about and rates of adoption of GenAI by lawyers.

This year, two reports with a similar focus were released. In mid-April, Thomson Reuters published the “2024 Generative AI in Professional Services Report” (TR Report); its survey was conducted in January and February 2024 and included data on law firm use of GenAI. Earlier in the year, the “2024 MyCase + LawPay Legal Industry Report” (ML Report) was released; its survey was conducted in August and September 2023, and the report was published in January.

Comparing statistics from the two reports provides useful benchmarks that highlight trends in GenAI usage by legal professionals. 

The reports showed that the number of legal professionals who personally used GenAI tools for work-related reasons held steady, with 27% of respondents from both surveys using GenAI for work-related purposes. 

The data regarding how often those same individuals used GenAI was also remarkably consistent across the surveys. According to the TR Survey results, 42% of legal professionals who were actively using or planning to use GenAI used the technology at least daily, and 31% used it weekly. The ML data showed that legal professionals whose firms had adopted GenAI frequently relied on it throughout their workday, with 42% using it daily and 29% using it weekly.

Another interesting finding from the ML Report: among respondents already using GenAI, 53% reported that their efficiency increased somewhat, and 24% reported that it increased significantly.

The tasks accomplished with GenAI were similar across both reports as well. According to the TR Report respondents, the top five use cases in law firms were legal research, document review, brief or memo drafting, document summarization, and drafting correspondence. 

For the ML Report, the top use case was brainstorming (58%), followed by drafting correspondence (55%), general research (as opposed to legal research) (46%), document drafting (42%), drafting document templates (39%), summarizing documents (38%), and editing documents (34%).

Concerns about the unique issues presented when legal professionals use GenAI tools were also mirrored across the reports. The top worry of respondents from the TR Report was inaccurate responses, cited by 79%. This was followed by data security (68%); privacy and confidentiality of data (62%); compliance with laws and regulations (60%); and ethical and responsible usage (57%).

The top blockers identified in the ML Report were lack of knowledge about the technology (52%), concerns about ethical issues (39%), lack of trust in GenAI output (39%), the infancy of the technology (33%), and concerns about privilege issues (25%).

Other notable data from the TR report addressed judicial perspectives on GenAI and the legal billing ramifications. First, survey responses highlighted the distrust and reticence about GenAI often expressed by members of our nation’s courts. Of the court respondents, only 8% indicated that their court systems were using or planned to use GenAI, and 60% reported there were no current plans to use it.

The perceived impact of GenAI on legal billing rates and methods was also explored. Notably, the majority of legal professionals, 58%, did not believe that GenAI would have any effect on the rates charged to clients. Over a third (39%), however, believed GenAI would lead to an increase in alternative fee arrangements. I tend to agree with those respondents and think there is a significant likelihood that sophisticated legal consumers like insurance companies and corporate counsel will put pressure on their legal providers to use GenAI to work more efficiently and thus charge more competitive, predictable legal fees.

Last but not least, the ML Report provided insight into how legal professionals expected GenAI to affect hiring practices. The survey results showed that 28% of firms planned to replace administrative functions with GenAI, followed by legal-specific functions (18%) and currently outsourced functions (13%). Another 10% shared that their firms intended to fully replace an administrative employee and 2% hoped to replace a lawyer.

These surveys highlight the steady pace of adoption of GenAI tools by legal professionals, with the end result being enhanced efficiency and reduced workflow friction. Despite the ongoing and valid concerns about accuracy and ethical issues, the legal profession has warmed up to GenAI far more quickly than to the technologies that preceded it, and is gradually integrating GenAI into law firm operations. 

The times they are a-changin’ like never before. Now, more than ever, it’s essential to be open-minded and curious about emerging technologies, lest you be left behind.


 


Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Top 5 LinkedIn Tips Every Lawyer Should Know

When I started writing this column in 2007, I often covered social media use for lawyers. However, because my interest lies in emerging technologies, the focus of my articles necessarily shifted over time as new advancements arrived that had the potential to change the legal profession.

Even though social networking may be considered old news in the technology world, online interactions continue to have a noticeable impact on the practice of law. Some platforms have gained increasing relevance while others have declined. LinkedIn is a prime example of a social media site that has gained ground since the pandemic, transforming from what was essentially an online resume repository to an active, engaging online site.

Given its significant rise, an update seemed necessary. I have over 207,000 followers on LinkedIn, so I have experience with the site and lots of advice to share! To that end, below you’ll find my top 5 tips for lawyers seeking to increase their presence on LinkedIn.

First and foremost, determine your goals. If you don’t know what you’re trying to achieve by interacting on LinkedIn, then your efforts will be wasted. Are you trying to reach potential clients? Is your intent to expand your professional network and increase referrals from colleagues? Or are you seeking to stay on top of the latest industry news and trends? Whatever your goals are, identify them before diving in. They will necessarily impact your engagement on the site.

Next, ensure that you have created a robust LinkedIn profile. Your headline should concisely describe what you do and the value you bring to your clients and the profession. The first few words of your headline will appear whenever you comment on someone else’s post, so they are very important. Include only the most relevant work history, and carefully consider whether you want the dates of your degrees to appear on your profile. Your age and stage of life will necessarily impact your preferences. 

The third tip is to post with a regular cadence. Your posting frequency will depend on your goals and the amount of time you can devote to your LinkedIn presence. Whether you post once a week or every other day, stick to your plan so that your followers know when to expect to hear from you. The LinkedIn algorithm also frowns on erratic posting patterns, so consistency will help you reach more connections, leading to greater engagement and success.

Fourth, post thoughtfully. Share a mix of personal observations and professional updates. Avoid blasting your successes and triumphs into the ether without also offering personal, insightful perspectives on trends or news of interest to your followers. The algorithm favors early morning posts that include an image, so keep that in mind. Finally, put links to news stories or other websites in the comments rather than in the post itself, since LinkedIn prefers posts that don’t send users to other sites.

Last but not least, carefully curate your network. Follow people who interest you and conform to your goals, develop a community of like-minded individuals, and consistently engage with your network. Read the posts of others and like, comment, and share them, when appropriate. LinkedIn, like all social media sites, is about engagement, so engage with others rather than talking at them from your virtual podium.

LinkedIn is a very different site than it was before the pandemic. Its newfound levels of engagement from professionals worldwide have resulted in a dynamic community that should not be overlooked. So don’t rest on your laurels. Take advantage of the many benefits it offers by following the tips shared above. By implementing them, you’ll be well-positioned to maximize your impact and networking potential on LinkedIn.
