AI’s Role in Modern Law Practice Explored by Texas and Minnesota Bars

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

AI’s Role in Modern Law Practice Explored by Texas and Minnesota Bars

If you’re not yet convinced that artificial intelligence (AI) will change the practice of law, then you’re not paying attention. If nothing else, the sheer number of state bar ethics opinions and reports focused on AI released within the past two years should be a clear indication that AI’s effects on our profession will be profound.

Just this month, the Texas and Minnesota bar associations stepped into the fray, each issuing a report examining the issues raised when legal professionals use AI.

First, there was the Texas Taskforce for Responsible AI in the Law’s “Interim Report to the State Bar of Texas Board of Directors,” which addressed the benefits and risks of AI, along with recommendations for the ethical adoption of these tools.

The Minnesota State Bar Association (MSBA) Assembly’s report, “Implications of Large Language Models (LLMs) on the Unauthorized Practice of Law (UPL) and Access to Justice,” assessed broader issues related to how AI could potentially impact the provision of legal services within our communities. 

Despite their different focuses, the reports overlapped significantly in the topics they covered. For example, both emphasized the ethical use of AI and the importance of ensuring that AI increases rather than reduces access to justice.

Their approaches to these issues differed, however. While the Texas Taskforce sought to develop guidelines for ethical AI use, the MSBA report suggested there was no need to reinvent the wheel: existing ethical guidance about AI tools like LLMs issued by other jurisdictions was likely sufficient to help Minnesota legal professionals navigate AI adoption.

Both reports also stressed the value of ensuring that AI tools enhance, rather than hinder, access to justice. The Texas Taskforce highlighted the need to support legal aid providers in obtaining access to AI, while the MSBA Assembly recommended the creation of an “Access to Justice Legal Sandbox” that “would provide a controlled environment for organizations to use LLMs in innovative ways, without the fear of UPL prosecution.”

Overall, the MSBA Assembly’s approach was more exploratory, while the Texas Taskforce’s was more advisory. The MSBA Assembly’s report recommended detailed, actionable steps such as creating an AI regulatory sandbox, launching pilot projects, and establishing a Standing Committee to consider the report’s recommendations. In comparison, the Texas Taskforce identified broader goals such as raising awareness of cybersecurity issues surrounding AI, emphasizing the importance of AI education and CLEs, and proposing AI implementation best practices.

The issuance of these reports on the heels of other bar association guidance represents a significant step forward for the legal profession. While we’ve historically resisted change, we’re now looking forward rather than backward. Bar associations are rising to the challenge during this period of rapid technological advancement and providing lawyers with much-needed, practical guidance designed to help them navigate the ever-changing AI landscape.

While Texas focuses on comprehensive guidelines and educational initiatives, Minnesota’s approach includes regulatory sandboxes and pilot projects. These differing strategies reflect a shared commitment to ensuring AI enhances access to justice and improves the lives of legal professionals. Together, these efforts indicate a profession that is, at long last, willing to adapt and innovate by leveraging emerging technologies to better serve society and uphold justice in an increasingly digital-first age.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

Pre-Trial AI Tools For Lawyers

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Pre-Trial AI Tools For Lawyers

I often receive emails from lawyers who reach out after reading one of my articles about generative artificial intelligence (AI) tools. They frequently seek advice about implementing AI software in their firms. Some focus on ethics and accuracy concerns, while others ask for my input on which tools would best address a particular workflow issue. These communications sometimes inspire me to write about a specific type of AI software, since it’s a safe bet that other lawyers are struggling with the same issue.

Recently, many emails have focused on pre-trial AI tools for lawyers. This makes sense, since AI tools can streamline many of the repeatable and tedious tasks involved in the discovery and motion stages of a case.

Of course, both legalese and litigation processes are complex, which means that consumer-focused generative AI tools such as ChatGPT or Claude are often inadequate, producing less-than-ideal output. Fortunately, legal technology companies that thoroughly understand legal workflows and lawyers’ unique needs are much better positioned to develop tools that streamline pre-trial workflows and generate reliable and useful content.

Because AI can address many of the pain points encountered during the early stages of litigation, it’s no surprise that a wave of AI tools targeting pre-trial workflow challenges has been released over the past year.

Now that those products are available, let’s review some of the top categories. Note that I have not tested most of these tools and am only providing information regarding available software. You must carefully vet providers and take advantage of free trials and demos before settling on a tool. To assist with the vetting process, you’ll find a list of suggested questions to ask legal cloud and AI providers here.

The first category of AI tools we’ll consider is pre-trial discovery management. This software automates the tedious and redundant process of preparing routine pleadings, discovery requests, and discovery responses. Upon uploading complaints and other legal documents, this software will typically generate responsive documents such as answers, interrogatories, requests for admission, and document requests and responses. The following AI software can assist in drafting these types of documents: Legalmation, Ai.law, Briefpoint, and LexthinkAI.

Next are AI tools for deposition summarization and analysis. This software leverages AI algorithms to reduce the time spent reviewing and obtaining insights from deposition transcripts. Automating these tasks significantly streamlines the review process, allowing for more efficient case preparation and strategy development. Tools that offer this functionality include Legalmation, Lexthink.ai, Casemark, Lexis+ AI, and CoCounsel, a Thomson Reuters company.

Finally, there are AI tools that assist with brief drafting and analysis. AI technology is beneficial in this context since it can help edit text, improve writing, and adjust tone, reducing the time needed to draft complex legal documents. These tools typically function within word processing software such as Microsoft Word. Products that assist with brief writing include Clearbrief, BriefCatch, EZBriefs, and Wordrake.

Pre-trial AI tools are more than document robots; they’re powerful allies that reduce friction and enhance efficiency. Even if you’re not ready to invest in these tools at this early stage, it’s worthwhile to arm yourself with information about this category of software for future reference, since these products will undoubtedly grow more sophisticated over time. With these tools on your side, either now or down the road, you’ll be able to focus on crafting winning arguments while AI tackles tedious pre-trial tasks. The result? Less stress, happier lawyers and clients, and a future-proofed legal practice.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

New Report Highlights GenAI Adoption Trends in Law

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

New Report Highlights GenAI Adoption Trends in Law

For legal professionals facing an ever-evolving technology landscape shaped by rapid advancements in artificial intelligence, data-driven decisions are the key to successful adaptation. Because change is occurring quickly, up-to-date information is essential. That’s where the Thomson Reuters Institute 2024 Generative AI in Professional Services Report comes in.

This report highlights how professionals, including lawyers, view and use generative AI (GenAI). It offers insights into legal professionals’ attitudes and adoption rates and provides law firm leaders with timely industry data. Using this information, you can make informed choices about when and how to implement GenAI in your firm.

First, let’s consider legal professionals’ perspectives on GenAI. The report shows that while 85% of legal professionals believe GenAI could be applied to their work, only a slight majority (51%) say it should be.

Data from the report also indicates that ethical concerns about the unauthorized practice of law could drive some of the reticence surrounding GenAI. The majority (77%) of legal respondents cited this issue as either a significant threat or somewhat of a threat to the profession.

Our judicial counterparts are even more cautious about incorporating GenAI into their workflows, with 60% having no current plans to use it and only 8% currently experimenting with it.

Also notable is that legal-specific GenAI tools are not yet mainstream in our profession. According to the report, only 12% of legal professionals use legal-specific GenAI tools today, but another 43% plan to do so within the next three years. In comparison, consumer GenAI tools are more popular at present, with 27% of legal industry respondents using them and another 20% planning to do so within the next three years. In other words, within a few years, the adoption of legal-specific tools (a projected 55%) will outpace that of consumer tools (47%) in the legal space, and rightly so, since legal technology providers have a far better understanding of the unique needs of legal professionals.

For those currently using GenAI, the top use cases in law firms include legal research, document review, brief or memo drafting, document summarization, and correspondence drafting.

Data from the report showed that compared to their law firm counterparts, corporate legal departments are more document-focused in their GenAI usage. Contract drafting comes in first, followed by document review, legal research, document summarization, and extracting contract data.

Similarly, government and court respondents also focused primarily on leveraging GenAI tools to work with documents. Use cases included legal research, document review, document summarization, brief or memo drafting, and contract drafting.

Another interesting data point from this report revolved around perspectives on shifting the cost of GenAI tools when used to provide legal services. According to the report, law firms report primarily absorbing GenAI investment costs as firm overhead (51%), with a smaller portion passing the costs to customers on a case-by-case basis (16%) or across the board (9%). 4% use other methods, and 20% have not yet determined their approach.

Alternative pricing for legal services was also discussed, with more than a third of respondents (39%) sharing that GenAI may result in an increase in the use of alternative fees. Meanwhile, another 28% were unclear as to how GenAI adoption might impact law firm billing moving forward.

Last but not least, recruitment. 45% of legal professionals surveyed indicated that their firms do not plan to target applicants with AI or GenAI skills, while 17% identified it as a "nice to have" skill. Only 2% said their firms would require it.

If you haven’t read this report, now’s the time. It provides valuable data that highlights the growing awareness of AI's potential impact on our profession, even though adoption rates vary. Many legal professionals see the value of AI but remain cautious about fully embracing it. 

The findings from this report offer valuable insights that can guide law firm leaders in making informed decisions about integrating AI into their firm’s workflows. As AI technology advances, insights like these will help you strategically decide when and how to implement GenAI, ultimately shaping the future of your law practice.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

Balancing Innovation and Ethics: Kentucky Bar Association’s Preliminary Stance on AI for Lawyers

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Balancing Innovation and Ethics: Kentucky Bar Association’s Preliminary Stance on AI for Lawyers

The rapid advancement of generative artificial intelligence (AI) technology has had many effects, one of which has been to spur bar association ethics committees into action. In less than two years, at least eight jurisdictions have issued AI guidance in one form or another, including California, Florida, New Jersey, Michigan, and New York, which I’ve covered in this column. 

Most recently, I discussed a joint opinion from the Pennsylvania Bar Association Committee on Legal Ethics and Responsibility and the Philadelphia Bar Association Professional Guidance Committee, Joint Formal Opinion 2024-200, and promised to subsequently tackle the Kentucky Bar Association’s March opinion, Ethics Opinion KBA E-457, which I’ll cover today.

This opinion was issued in March and was published to the KBA membership in the May/June edition of the Bench & Bar Magazine. After the 30-day public comment period expires, it will become final.

This opinion covers a wide range of issues, including technology competency, confidentiality, client billing, notification to courts and clients regarding AI usage, and the supervision of others in the firm who use AI. 

Notably, when providing the necessary context for the guidance provided, the Committee wisely acknowledged that hard and fast rules regarding AI adoption by law firms are inadvisable since the technology is advancing rapidly, and every law firm will use it in different, unique ways: “The Committee does not intend to specify what AI policy an attorney should follow because it is the responsibility of each attorney to best determine how AI will be used within their law firm and then to establish an AI policy that addresses the benefits and risks associated with AI products. The fact is that the speed of change in this area means that any specific recommendation will likely be obsolete from the moment of publication.”

Accordingly, the Committee’s advice was fairly elastic and designed to change with the times as AI technology improves. The Committee emphasized the importance of maintaining technology competency, which includes staying “abreast of the use of AI in the practice of law,” along with the corresponding duties to continually take steps to maintain client confidentiality and to carefully “review court rules and procedures as they relate to the use of AI, and to review all submissions to the Court that utilized Generative AI to confirm the accuracy of the content of those filings.”

As other bar associations have done, the Kentucky Bar Ethics Committee also highlighted the issues surrounding client communication and billing when using AI to streamline legal work. 

Departing from the hard and fast requirement that some bars have put in place regarding notifying clients whenever AI is used in their matter, the Committee took a more moderate approach, requiring lawyers to do so only under certain circumstances. The Committee explained that there is no “ethical duty to disclose the rote use of AI generated research for a client's matter unless the work is being outsourced to a third party; the client is being charged for the cost of AI; and/or the disclosure of AI generated research is required by Court Rules.”

Next, the Committee determined that when invoicing a client for work performed more efficiently using AI, lawyers should “consider reducing the amount of attorney's fees being charged the client when appropriate under the circumstances.” Similarly, lawyers may pass on expenses related to AI software if there is “an acknowledgment in writing whereby the client agrees in advance to reimburse the attorney for the attorney's expense in using AI.” However, the Committee cautioned that the “costs of AI training and keeping abreast of AI developments should not be charged to clients.”

Finally, the Committee confirmed that lawyers who are partners or managers have a duty to ensure the ethical use of AI by other lawyers and employees, which involves appropriate training and supervision.

This opinion provides a thorough analysis of the issues and sound advice regarding AI usage in law firms. I’ve only hit the high points, so make sure to read the entire opinion for the Committee’s more nuanced perspective, especially if you are a Kentucky attorney. AI is here to stay and will inevitably impact your practice, likely much sooner than you might expect, given the rapid change we’re now experiencing. Invest time into learning about this technology now, so you can adapt to the times and incorporate it into your law firm, ultimately providing your clients with more efficient and effective representation.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


More AI Ethics Guidance Arrives With Pennsylvania Weighing In

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

More AI Ethics Guidance Arrives With Pennsylvania Weighing In

The rate of technological change this year has been off the charts. Lately, there are daily announcements of new generative artificial intelligence (AI) products, feature releases, and acquisitions. Advancement has been occurring at such a rapid clip that it’s more challenging than ever to keep up with the pace of change — blink, and you’ll miss it!

Given how quickly AI has infiltrated our lives and profession, it’s been all the more impressive to watch bar association professional disciplinary committees step up to the plate and issue timely, much-needed guidance. Even though generative AI has been around for less than two years, California, Florida, New Jersey, Michigan, and New York had already issued GenAI guidance for lawyers as of April 2024.

Just a few months later, two other states, Pennsylvania and Kentucky, have weighed in, providing lawyers in their jurisdictions with roadmaps for ethical AI usage. Today, I’ll discuss the Pennsylvania guidance and will cover Kentucky’s in my next article.

On May 22, the Pennsylvania Bar Association Committee on Legal Ethics and Responsibility and the Philadelphia Bar Association Professional Guidance Committee issued Joint Formal Opinion 2024-200. In the introduction to the opinion, the joint Committee explained why it is critical for lawyers to learn about AI: “This technology has begun to revolutionize the way legal work is done, allowing lawyers to focus on more complex tasks and provide better service to their clients…Now that it is here, attorneys need to know what it is and how (and if) to use it.” A key way to meet that requirement is to take advantage of “continuing education and training to stay informed about ethical issues and best practices for using AI in legal practice.”

The joint Committee emphasized the importance of understanding both the risks and benefits of incorporating AI into your firm’s workflows. It also stated that if used appropriately and “with appropriate safeguards, lawyers can utilize artificial intelligence” in a compliant manner. 

The opinion included many recommendations and requirements for lawyers planning to use AI in their practices. First and foremost, the joint Committee emphasized basic competence and the need to “ensure that AI-generated content is truthful, accurate, and based on sound legal reasoning.” This obligation requires lawyers to confirm “the accuracy and relevance of the citations they use in legal documents or arguments.”

Another area of focus was on protecting client confidentiality. The joint Committee opined that lawyers must take steps to vet technology providers with the end goal being to “safeguard information relating to the representation of a client and ensure that AI systems handling confidential data adhere to strict confidentiality measures.”

Notably, the joint Committee highlighted the importance of ensuring that AI tools and their output are unbiased and accurate. This means that when researching a product and provider, steps must be taken to “ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.”

Transparency with clients was also discussed. Lawyers were cautioned to ensure clear communication “with clients about their use of AI technologies in their practices…(including) how such tools are employed and their potential impact on case outcomes.” Lawyers were also advised to clearly communicate with clients about AI-related expenses, which should be “reasonable and appropriately disclosed to clients.”

This guidance — emphasizing competence, confidentiality, and transparency — is a valuable resource for lawyers seeking to integrate AI into their practices. This timely advice helps ensure ethical AI usage in law firms, especially for Pennsylvania practitioners. For even more helpful ethics analysis, stay tuned for my next article, where we’ll examine Kentucky's recent AI guidance.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software and LawPay payment processing, AffiniPay companies. She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].