Daily Record: Legal Currents Column

No-Nonsense Guidance for Lawyers Still Confused About AI

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****


I know, I know. You’re probably thinking, “Another column about generative artificial intelligence?” But guess what? This article isn’t about the latest artificial intelligence (AI) news and updates (which really are important, by the way). Instead, it’s a resource guide that will provide you with the information you need to understand AI, what it means for the future of your law practice, and how to make educated decisions about how and when to use it in your law firm. 

Generative AI advancements are happening incredibly quickly. Huge leaps in processing speed and power enable more sophisticated use cases and better response times. The improved functionality supports faster development, with new AI software and updates to existing tools rolling out weekly. 

The pace of change is breathtaking and unlike anything we’ve seen before. Even for those of us who watch the space carefully, it’s hard to keep up.

So what’s a lawyer to do? Learn as much as you can! If you’re not sure where to start, I’ve got you covered. Below, you’ll find resources that offer insights and guidance on generative AI basics along with how-tos from legal technology industry experts, including yours truly. 

So without further ado, let’s get to it. According to the AffiniPay 2025 Legal Industry Report, 76% of legal professionals reported that ethics issues were a top hurdle slowing AI adoption. Rather than let fears about compliance stall your firm’s progress, why not up your AI ethics game by ensuring your implementation dots all the i’s and crosses all the t’s?

To do this, look no further than Justia’s frequently updated 50-state survey of legal AI ethics rules. This site is a one-stop shop that lets you track ethics opinions and guidance across jurisdictions. This means that even if your state hasn’t weighed in yet, you can gain insights from legal experts across the country and make implementation decisions based on the latest guidance.

Once you’re clear on the ethical issues involved in AI adoption, the next step is to ensure you understand the technology and its implications for our profession and beyond. That’s where a newly published book by internationally renowned legal technology expert Richard Susskind, author of “The Future of Lawyers,” comes in. In his latest book, “How to Think About AI: A Guide for the Perplexed,” he explains what AI is, what it might become, and why we need to think beyond today’s tools to understand its long-term impact on our lives and society.

Next, let’s talk nuts and bolts. What AI tools should you implement in your firm? How do you actually use generative AI in your own workflows? In an upcoming course on June 27 from 11 am to 3 pm ET, “AI Teach-In for Lawyers,” you’ll learn all that and more. This free, hands-on course is offered by my co-author and good friend, Carolyn Elefant, and will include an overview of AI tools along with live demos from lawyers just like you, showing how they’re putting generative AI to work in their law firms.

Finally, you’ll need to vet and choose generative AI tools for your law practice. My ABA Journal column is the perfect place to start. Most months, I write about a specific category of legal software, explaining what it is, how it can help your firm, issues to consider when choosing a tool, and an overview of the top options available in the marketplace. Since the rise of generative AI, many of my columns have focused on software that includes AI features, so this column is a great place to start your research into legal-specific AI software for your law firm.

You don’t have to chase every new AI tool, but ignoring AI isn’t a great strategy either. The technology landscape is changing fast, and the lawyers who prioritize understanding AI, including the ethical issues, use cases, and reputable legal tools available, will be better positioned to adapt and thrive. The resources I’ve shared here aren’t hype; they’re practical, credible, and built for lawyers like you. 

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, the ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences on the intersection of law and emerging technologies. She is an ABA Legal Rebel and is listed on the Fastcase 50 and the ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


The legal profession falls behind on AI while other industries move ahead


As artificial intelligence (AI) tools gain rapid traction across professional services, a clear pattern is emerging: law firms are dragging their feet. While other industries are integrating AI into daily workflows and experiencing significant benefits, recent data shows that our profession remains the outlier: cautious, skeptical, and slow to adapt. 

The 2025 Intapp Technology Perceptions Report highlights this divide with data that serves as a wake-up call. Eight hundred twenty people were surveyed across four professions: legal, accounting, consulting, and finance. One hundred seventy legal professionals weighed in, representing 21% of all respondents. 

From the survey data, a clear pattern emerged: the legal segment had the lowest adoption levels and was the most cautious in its approach to AI. For example, slightly more than half (55%) of legal respondents were using AI at work. In comparison, 89% of survey respondents in finance were using it, followed by 73% in accounting, and 68% in consulting. 

The data also showed that 25% of legal professionals reported that their firms had no plans to adopt AI, twice the rate reported by accountants, with even lower rates among consulting and finance firms. Firm-level adoption lagged as well: only 39% of law firms were already using AI, compared to 57% of accounting firms, 54% of consulting firms, and 71% of finance firms.

The legal profession’s failure to adopt AI is unfortunate, considering the positive outcomes experienced by survey respondents overall. Across industries, more than half of professionals report that their firms are using AI successfully to boost productivity, innovation, and creativity. The time savings were tangible: 38% of professionals say AI saves them 3–5 hours per week, 18% save 6–10 hours, and 8% save more than 10 hours. Notably, 82% of professionals say the quality of AI-generated work is at least as good as their own.

They’re using that newfound time to, among other things, improve their work-life balance (48%), focus on high-level client work (42%), engage in strategic planning (33%), build client relationships (28%), increase billable hours (24%), and pursue new business opportunities (23%).

In other words, other industries are seeing noticeable gains from AI adoption while the legal profession lags behind. The cost of this continued reluctance will only compound as the technology improves and is adopted across the business world. In the long run, the legal profession’s overly cautious approach to AI will prove unnecessarily costly and detrimental to client service and business goals.

Many reasons are offered for this slower rate of AI adoption. Some claim the technology is too new or too flawed to trust, but those concerns are a red herring. For years now, AI has been embedded in the tools regularly used by legal professionals, including word processing software. Think predictive text, Grammarly, and document automation.

As AI has improved, the biggest change has been visibility and scale, not function. Whether the AI tools and output are simple or complex, intelligent follow-up has always been required. Errors aren’t a reason to reject the tools; they’re a reason to use them responsibly.

What the legal profession calls caution is really complacency. While lawyers debate hypotheticals, peers in finance, consulting, and accounting are saving hours, focusing on strategy, and increasing efficiency. Being overly cautious won’t cut it anymore.

Failing to engage with AI now isn’t prudent; it’s shortsighted, and it’s costing firms more than they realize. The message is clear: AI isn’t a distant threat or a passing trend—it’s a present advantage. Our profession can either catch up or risk falling irreparably behind.



New Hampshire Ethics Committee On Drafting Legal Documents with AI


Despite repeated warnings from ethics committees, lawyers continue to make headlines for filing legal briefs that cite fake cases invented by generative artificial intelligence. Most recently, attorneys from some of the largest law firms have been guilty of this transgression.

To be clear, these errors occurred because lawyers failed to review and verify filings before submitting them to the court. Regardless of who, or what, prepares a document, you have an ethical obligation to carefully review it for accuracy.

So why does this keep happening? One key reason is that document drafting is an obvious legal use case for generative AI. According to the 2025 AffiniPay Legal Industry Report, 40% of the respondents who had adopted generative AI in their practices have used it to draft documents.

And with good reason. These tools can write initial drafts of legal documents in seconds, providing a foundational starting point for a more complex, carefully reviewed final draft. The problem is that many lawyers skip the second part, failing to confirm the accuracy of the draft.

Ethics opinions have addressed the obligations at play when lawyers incorporate generative AI into their legal workflows, but are there any unique issues to consider when using this technology for document creation? This question was recently posed to the New Hampshire Bar Association’s Ethics Committee, which responded in an “Ethics Corner Article.” 

Before answering the query, the Committee pointed out the obvious—that AI-assisted document creation is not a new concept: “The truth is, lawyers have been using artificial intelligence…to help us draft documents, e-mails, and correspondence, for some time.” Predictive text is one example, as are tools that correct grammar, such as Grammarly.

Next, the Committee explained that since embedded enterprise-grade AI is still in its infancy, it would focus on the ethical issues surrounding the application of consumer-grade AI tools like ChatGPT and Perplexity to legal work. 

The Committee reiterated what was discussed above: these tools can produce errors, and as part of the duties to provide competent representation and candor to a tribunal, “lawyers must verify the authenticity of any information produced,” whether it’s legal research or a document.



Courts Are Facing an AI Tidal Wave


If you haven’t noticed that artificial intelligence (AI) is already impacting the practice of law, then you’re not paying attention. Case in point: Judges are increasingly facing decisions about the admission of AI-created evidence at trial, and this is just the beginning. Courts will face a tsunami of AI-generated evidence, and I’m not sure we’re prepared for what’s coming.

That this is occurring isn’t surprising. The justice system has grappled with digital evidence for decades now. The first big change occurred in 2006 when the Federal Rules of Civil Procedure were amended to explicitly address electronic discovery. Another major trend was social media, which had a significant impact on trials as lawyers and judges struggled to keep up with the rapid influx of evidence mined from new and emerging social media sites.   

AI is the next wave of technology that will leave its mark on trials. The difference is that it is arriving much faster and has the potential for a far more dramatic impact on the administration of justice than any of the technologies that preceded it.

It’s already cropping up in courts across the country in many different contexts, and this is just the beginning. 

Most recently, earlier this month at an Arizona sentencing, the family of the deceased victim of a road rage incident presented a victim impact statement in the form of an AI-generated video of the victim. The dialogue was written by his sister. The judge permitted the statement and considered it when sentencing the defendant to the maximum term, which exceeded the prosecution’s recommendation.

In April, at an argument before the New York State Supreme Court Appellate Division’s First Judicial Department, a pro se litigant whose voice had been impacted by throat cancer attempted to argue his appeal via a video of an AI-generated lawyer. The Court quickly put a stop to it and ordered him to proceed on his own.

In another case, State of Washington v. Puloka, the defense sought to introduce AI-enhanced versions of a bystander’s iPhone video of the alleged criminal incident. The judge declined to admit the videos into evidence, deeming them unduly prejudicial.

In a California case arising from the death of a man killed while riding in his Tesla set to auto-drive, his family sought to introduce a video of Elon Musk discussing the safety of that feature. The defense objected, claiming that the video could have been a deepfake, but the court rejected that challenge.

In January, a Florida judge donned a VR headset in a criminal case in the 17th Judicial Circuit. The headset was provided by the defense in a stand-your-ground case and allowed the judge to “view” the wedding reception where the alleged assault occurred from the “perspective” of the defendant. 

Finally, another notable intersection of judges and AI occurred just last week. It was reported that a newly elected Broward County Judge, Lauren Peffer, faces a judicial ethics complaint alleging that she promoted an AI-generated audio recording during her campaign that falsely depicted her opponent making sexually explicit and inappropriate remarks.

In other words, AI isn’t coming—it’s already here, and it’s changing how evidence is created, presented, and evaluated in courtrooms across the country. From deepfakes to virtual reality to synthetic speech, these tools are influencing decisions in ways we’ve never seen before. 

Ready or not, the volume and complexity of AI-generated evidence will only grow. To maintain the integrity of our justice system, we need to start thinking now about how to address it, both practically and ethically. Otherwise, we risk letting untested, unreliable, or manipulative AI-generated outputs undermine the very system we rely on to deliver fair and just outcomes. 



From Hype to Habits: Comparing Data on Generative AI in Law Firms


There’s a lot of noise about lawyers using generative artificial intelligence (AI)—and not always appropriately—but how is it really being used in law firms? Are legal professionals embracing it, or are they taking a “wait and see” approach? For those who’ve jumped in feet first, has the investment paid off, or is it a waste of time and money?

If you’re as curious about these issues as I am, you’re in luck. Since generative AI was first publicly released more than two years ago, a steady stream of reports has provided insight into how law firms are approaching it and how perspectives on its benefits and risks are changing.

Most recently, Thomson Reuters released the 2025 Generative AI in Professional Services Report in mid-April. This report provides insight into how and why professionals, including lawyers, accountants, and government agencies, are integrating generative AI into their workflows. When its law firm statistics are compared to similar data points from another recent report, the 2025 AffiniPay Legal Industry Report, interesting adoption trends and perceptions about this emerging technology are revealed. 

For example, the Thomson Reuters report shows that 28% of law firms are already using generative AI, with another 14% planning to start in the near future. Meanwhile, 36% are still just researching it, and 22% say they have no plans to use it at all. Compare that to the AffiniPay report, where only 21% of firms say they’re using legal-specific generative AI tools. Just 3% said they’ll never use it, which suggests most firms are at least open to the idea, even if they haven’t jumped in yet. Bottom line: interest is there, but a lot of firms are still figuring out how (or if) generative AI fits into their practice.

When it comes to how often firms are actually using it, the numbers are even more interesting. According to Thomson Reuters, only 12% of users say they’re tapping into generative AI multiple times a day, and 21% use it daily. Most fall into the “weekly” category at 36%, and 6% use it only occasionally. The AffiniPay data suggests a higher frequency of regular usage: 45% of individual users say they use it every day, and 40% weekly. That’s a noticeable jump, and it could mean that once firms commit to it, there is enough value to make the technology a regular part of workflows.

Next, let’s take a look at how legal professionals are using generative AI. The two reports show some overlap but also some telling differences. In the Thomson Reuters report, which surveyed a significantly higher base of large firm users, the top legal use cases are document review (77%), legal research (74%), and summarizing documents (74%), followed closely by drafting briefs or memos (59%) and contract drafting (58%). In contrast, the AffiniPay report shows a broader range of lighter-touch use cases: 54% use it to draft correspondence, 47% for brainstorming, and 46% for general research. Fewer use it for core legal tasks—only 40% say they draft documents with it, and just 38% use it for legal research. The takeaway? While some larger firm lawyers are leaning into high-impact legal work, legal professionals from smaller firms are still testing the waters with lower-stakes tasks.

Even without a one-to-one comparison of data from the AffiniPay report, several statistics from the Thomson Reuters report stand out. Overall, support for generative AI in legal work is growing: 59% said it should be applied to legal tasks (up from 51% last year), and 89% said it can be. 

But that optimism doesn’t mean firms have everything figured out. Over half (53%) said generative AI won’t impact billing rates, while 20% expect rates to rise, likely reflecting added value rather than cost savings. Only 7% expect rates to drop, which is surprising given the time savings often associated with the technology. Apparently, only a small minority of firms have any interest in passing that windfall on to clients.

On the cost side, 42% plan to absorb generative AI-related expenses, while 30% will pass at least some costs on to clients, again highlighting a tendency to favor profit over client savings.

Finally, how important is generative AI knowledge in the hiring process? Despite growing use, only 21% of firms see generative AI knowledge as a “nice to have,” and 34% say it’s not required at all; the rest are unsure or have no plans to hire new staff in 2025. These viewpoints suggest firms still see generative AI as a business tool rather than a core competency for legal professionals.

No matter how you slice the data, one thing is clear: law firms are moving fast to incorporate generative AI into their tech stacks. Some are all in, others are still testing the waters, but across both reports, it’s clear the technology is shifting from buzzword territory to an everyday tool. While there is still uncertainty around pricing, impact, and training, that hasn’t slowed momentum. Generative AI is already changing how legal work gets done, and adoption is occurring at a much faster rate than for the technologies that preceded it.



Should Using AI Mean Lower Fees? Virginia Ethics Committee Weighs In


When legal professionals experiment with new technologies, knee-jerk reactions from ethics committees often follow. Generative artificial intelligence (AI) is no exception. As with the technologies that preceded it, lawyers seeking to use AI in their practices have faced unnecessary restrictions or requirements.

One example is the duty to advise clients of its use, which is included in many ethics opinions on AI. This requirement rears its ugly head whenever technologies are novel, but is ultimately abandoned when they become ubiquitous. This same evolution will occur with AI.

Another common issue that crops up is how to properly and ethically bill clients for legal services when efficiency gains arise from the incorporation of new technologies like AI into legal workflows.

This issue was most recently addressed in March in Proposed Legal Ethics Opinion 1901 from the Virginia State Bar. The proposed opinion, which was released pending public comment, addresses the reasonableness of legal fees when generative AI is used to provide legal services.

The opinion is surprisingly nuanced, and its approach thoughtful and balanced. For starters, the Legal Ethics Committee wisely acknowledged that its conclusions were not limited to AI usage: “Though this opinion is specifically addressing productivity improvements generated through the use of generative AI, its principles may be equally applicable to a lawyer’s use of other technological tools that result in comparable productivity improvements.”

The Committee explained that the time saved from AI efficiency gains does not automatically require lawyers to reduce their fees. Instead, in addition to the actual legal work, the knowledge required to effectively evaluate and apply AI tools has value: “(T)he ‘skill requisite to perform the legal service properly’ might actually increase…The lawyer's judgment in determining when and how to deploy AI tools, and the expertise needed to critically evaluate AI-generated content, represent valuable services for which the lawyer reasonably can be compensated.”

Notably, the Committee disagreed with the conclusion reached in ABA Formal Opinion 512—that “it may be unreasonable under Rule 1.5 for the lawyer to charge the same flat fee when using the GAI tool as when not using it.” 

Instead, the Committee determined that the result of AI-driven productivity gains should not effectively penalize lawyers by reducing flat fees. Pursuant to Rule 1.5(b), all legal fees must be reasonable, “but the time spent on a task or the use of certain research or drafting tools should not be read as the preeminent or determinative factor in that analysis.” Lawyers should not be required to forfeit reasonable profit “if clients continue to receive value from the lawyer’s output.”

However, Rule 1.5(b) requires lawyers to adequately explain the cost of legal work, including value-based fees, to clients. According to the Committee, if an AI tool significantly increases efficiency, it may be necessary to offer clients additional context about the legal work provided:  “(T)he client may need additional explanation of why the lawyer’s experience, technical skills, or other efficiencies contribute to the value of the services and determination of the fee.”

I agree with the Committee’s approach. Ethics rules shouldn’t reward inefficiency or punish lawyers for using the right tools. If a lawyer’s expertise includes knowing when—and how—to apply AI effectively, that judgment has value. Clients deserve transparency, but efficiency shouldn’t come at the cost of fair compensation. 

The Virginia opinion reflects a familiar pattern in legal ethics: New technologies often prompt early overcorrections that fade once the tools become widely accepted. Generative AI is no different, and this opinion moves us closer toward treating it that way.

If you’re a Virginia legal professional, don’t overlook this proposed opinion—it’s thoughtful, relevant, and worth your time. Public comments are open to all through May 7, 2025: [email protected]. This is your chance to be heard if you have strong opinions about these issues.



Oregon’s AI Ethics Opinion: A Wake-Up Call for Lawyers


In February, the Oregon Board of Governors approved Formal Opinion 2024-205, which addresses how Oregon lawyers can ethically use artificial intelligence (AI) and generative AI in their practices. 

The opening line of the opinion is notable: “Artificial intelligence tools have become widely available for use by lawyers. AI has been incorporated into a multitude of products frequently used by lawyers, such as word processing applications, communication tools, and research databases.” While that conclusion may be true today, it’s a relatively recent development: AI rarely appeared in legal software until approximately 2015, when it began showing up in legal research, contract analysis, and litigation analytics tools, and generative AI is newer still, with ChatGPT only publicly released at the end of November 2022.

This recent wave of AI adoption by legal professionals has prompted an extraordinarily rapid response from ethics committees. Since 2023, the American Bar Association and more than 15 jurisdictions, including Florida, New York, Texas, Pennsylvania, and North Carolina, have issued ethics opinions addressing AI use by lawyers. Oregon adds to this growing body of guidance.

The Oregon opinion’s guidance aligns closely with the conclusions reached in ABA Formal Opinion 512 (2024) and addresses key ethical issues, including competence, confidentiality, supervision, billing, and candor to the court.  

Tackling competence, the Oregon Legal Ethics Committee explained: “(AI) competence requires understanding the benefits and risks associated with the specific use and type of AI being used,” and the obligation is ongoing.

Next, the Committee considered client disclosure, explaining that Oregon lawyers may be required to disclose AI use to clients. The decision to do so needs to be made on a case-by-case basis and factors to consider include “the type of case, similarities to and deviations from technology typically used, novelty of the technology, risks to client data, risks that incorrect information will be included in the lawyer’s work product, sophistication of the client, deviation from explicit client instructions or reasonable expectations, the scope of representation, the extent of the lawyer’s reliance on the technology, the existence of safeguards present in the technology and independently implemented by the lawyer, and whether the use of AI or other new technology would have a significant impact on attorney fees or is a cost passed on to the client.”

Turning to fees, the Committee joined many other jurisdictions in determining that lawyers may only charge clients for reasonable time spent using AI for “case-specific research and drafting” and cannot bill for time that would have been spent on the case but for the implementation of AI tools. Billing for time spent learning how to use AI may only occur with the client’s consent. If a firm intends to invoice clients for the cost of AI tools, clients must be informed, preferably in writing. And if a lawyer is unable to determine the actual cost of a specialized AI tool used in a client matter, prorated cost billing is impermissible in Oregon; the charges should instead be treated as overhead.

To protect client confidentiality, lawyers seeking to input confidential information into an “open” model, which allows the input to train the AI system, must obtain consent from their client. The Committee cautioned that even when using a “closed” AI tool that does not use input to train the model, lawyers must carefully vet providers to ensure that vendor contracts address how data is protected, including how it will be handled, encrypted, stored, and eventually destroyed.  According to the Committee, even when using a closed AI model, it may be appropriate “to anonymize or redact certain information that (clients deem) sensitive or that could create a risk of harm…”

Next, the Committee opined that managerial and supervisory obligations require firms to have policies in place that provide clear guidelines on permissible AI use by all lawyers and staff. Additionally, the Committee confirmed that lawyers must carefully review the accuracy of both their own AI-assisted work product and that prepared by subordinate lawyers and nonlawyers.

Finally, the Committee confirmed that Oregon lawyers must be aware of and comply with all court orders regarding AI disclosure. Additionally, they are required to carefully review and verify the accuracy of AI output, including case citations. Should an attorney discover that a court filing includes a false statement of fact or law, they must notify the court and correct the error, taking care to avoid disclosing client confidences.

For Oregon attorneys, this opinion is a “must read,” just as it is for lawyers in jurisdictions that have not weighed in on these issues. Regardless of your licensure, the release of this opinion, along with more than 15 others in such a short period of time, should be a wake-up call. The pace of change isn’t slowing. If you haven’t started learning about AI, now is the time. The technology is advancing quickly; failing to learn about it now will only make it harder to catch up. 

These opinions aren’t just academic—they’re a warning. To make informed, responsible decisions about how and when to use AI, lawyers need to start paying attention today.



10 Practical Ways for Legal Professionals to Start Using Generative AI Today

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

10 Practical Ways for Legal Professionals to Start Using Generative AI Today

Last week, I was in Chicago for the ABA Techshow. This was my 16th year attending, and while there, I presented a number of times, including “Top 60 AI Use Cases in 60 Minutes.” During this session, Greg Siskind and I offered a rapid-fire roundup of sixty different ways that legal professionals could use generative artificial intelligence (AI) to streamline workflows and law firm operations.

During our talk, we discussed specific, hands-on ways lawyers can start using generative AI in their firms to improve legal and business process workflows. In many cases, consumer generative AI software like ChatGPT can be used; in others, legal-specific tools are needed to protect confidentiality and appropriately analyze and leverage law firm data.

Below, I’ll share ten of the tips that I covered that can be accomplished using generally available tools like ChatGPT and Anthropic’s Claude. You can find a complete list of the sixty tips, which includes suggested prompts for each one, here.

The first tip is to use generative AI to assist you in writing better prompts for AI tools. A clear prompt leads to better results, which saves time and reduces frustration. This makes it easier to get meaningful and useful output.

Next, save time by using generative AI to assist in drafting law firm newsletters. Keeping clients updated on firm news and legal developments takes time. Generative AI can streamline newsletter creation, making it easier to stay in touch with clients, freeing up more time for billable work.

AI is also useful for brainstorming social media post ideas. Posting engaging content regularly is no easy task. Generative AI can reduce the lift by providing a steady flow of post ideas, ensuring a more consistent and effective online presence.

Another area where generative AI can assist is writing website content. Your firm’s website must clearly and accurately describe your firm and its practice areas. Generative AI can quickly draft relevant content that reflects your firm’s tone and values, making it easier to keep the site up-to-date and client-focused.

Performance reviews are another area where generative AI is useful. It can help set clear expectations within your firm by creating performance review criteria, supporting fair evaluations, and ensuring that everyone in your firm fully understands how their efforts are being measured.

AI can also improve client intake forms and reduce the time required to create them. By drafting intake forms that ensure consistent collection of necessary information for each case, onboarding is streamlined, reducing unnecessary back-and-forth with clients.

Generative AI is also helpful for reviewing internal workflows. It can effectively serve as a law firm consultant, reviewing how firm processes work and offering tips to increase efficiency. Use it to establish step-by-step workflows that reduce wasted time and streamline firm operations.

Next up is language translation. Clear communication with clients is essential, especially when there’s a language barrier. By using generative AI during client consultations, you can quickly and easily translate conversations in real time, helping you understand clients on the fly and making legal services more accessible.

The tone of communication matters, too. AI can revise emails or documents so that they are more empathetic, more formal, or more direct—depending on your needs. Oftentimes you’ll find this functionality will be built into tools you already use, like law practice management software. Lawyers are good at solving client problems but aren’t always effective at communicating outcomes empathetically. This use case solves that problem and increases the effectiveness of client communication.

Finally, jury selection is another area where AI can help. It can be used to brainstorm voir dire questions tailored to the case to uncover bias or a lack of receptiveness to your client’s position. Homing in on a specific issue and seeking ideas for ways to address it can often provide you with creative and effective approaches for voir dire.

These are just a sampling of the use cases covered in our presentation. Make sure to check out the full list for all sixty tips. You’re sure to discover a few ideas that will increase efficiency and productivity both personally and firmwide.



Be curious and adapt–or be left behind

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Be curious and adapt–or be left behind

It is not the most intellectual of the species that survives; it is not the strongest that survives; but the species that survives is the one that is able to adapt to and to adjust best to the changing environment in which it finds itself.

–Leon C. Megginson

For many years, legal professionals had the relative luxury of disregarding technology even as it advanced at unprecedented rates. Ignoring it was unwise and arguably a failure of the ethical obligation of competence, but it was nevertheless possible. You could remain oblivious without serious ramifications.

In 2012, avoiding technology became a professional liability when the duty of technology competence was added to Comment 8 of ABA Model Rule 1.1 by the ABA House of Delegates. Revised Comment 8 now states: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education, and comply with all continuing legal education requirements to which the lawyer is subject.”

Since that time, at least 40 U.S. states, including New York, have formally adopted the duty of technology competence in their versions of Rule 1.1.

Right around that time, the pace of technological advancement began to accelerate. Before 2012, Moore’s Law—which predicted computing power would double roughly every two years—held true. This steady, predictable rate of change made it easy to overlook technological progress. However, that period of gradual evolution ended just as Rule 1.1 was updated, ushering in an era of rapid innovation.

At the same time, the demand for computing power to support artificial intelligence (AI) skyrocketed, doubling approximately every 3-4 months—far outpacing Moore’s Law. By 2019-2020, breakthroughs like OpenAI’s GPT-3 pushed AI beyond narrow applications, enabling automation and decision-making at a never-before-seen scale.

Then came 2022. With the launch of ChatGPT and other generative AI tools, AI became not only more powerful but widely accessible. The rate of change continued to accelerate at a breakneck pace.

The end result is that today, lawyers no longer have the luxury of gradual adaptation. Falling behind means losing ground to those who embrace technology. Or, as is oft-repeated in legal technology circles, “Lawyers won’t be replaced by AI; lawyers who use AI will replace lawyers who don’t.” In other words, failing to adopt AI into your firm will ensure the loss of your competitive advantage.

Why? Because clients expect efficiency. Courts are digitizing. And, AI is impacting law firm workflows, from business processes and law firm management to legal research and contract review. Firms relying on AI tools will significantly increase productivity and reduce the number of new hires, ultimately increasing profitability. 

AI-driven efficiency gains will enable firms to price legal services more competitively. In lieu of the billable hour, alternative fee models can be more easily implemented. Generative AI integrated into legal billing or practice management platforms can analyze law firm data and assist in determining competitive, profitable flat fees for matters, offering legal clients the cost predictability they prefer.

The bottom line: the days of gradual technological advancements are long gone. The pace of change won’t slow down, and neither can you. Firmwide technology education and adoption must be a priority, and innovative client service must take precedence.

In 2025, keeping up with technology is no longer optional—it’s a professional survival skill. The question isn’t whether AI will impact your practice but when and whether you’ll be ready when it does.


 


The Legal Tech Divide: How Firm Size Determines Work and Innovation Trends

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

The Legal Tech Divide: How Firm Size Determines Work and Innovation Trends

How does your firm compare when it comes to AI implementation, hybrid work, and finance management software adoption? Not sure? Then check out the 2025 AffiniPay Legal Industry Report, which was released earlier this week. This report is based on responses from over 2,800 legal professionals, and the data highlights notable differences in how firms of various sizes are approaching technology adoption and workplace flexibility. 

The data shows that while large firms and solo practitioners are embracing AI and hybrid work, smaller and mid-sized firms are sticking with more traditional in-office approaches and slower tech adoption. These differences reflect the practical realities of running a law firm in 2024, with firm size impacting a range of issues from remote work policies to software implementation.

First, let’s take a look at work arrangements, which offer a good example of this divide: while only 28% of firms overall require full-time in-office work, smaller firms are the most likely to enforce it. Among firms with 2 to 5 lawyers, 36% mandate in-office attendance for all employees, the highest of any group. 

In comparison, solo practitioners prefer remote work, with 31% operating virtually—a number far above the overall average of 19%. For solos, remote work isn’t just about flexibility; it’s often a cost-saving measure that eliminates the need for expensive office space.

Mid-sized and large firms are also embracing hybrid models. Firms with 6 to 20 lawyers show a strong preference for hybrid schedules, with 32% implementing them for some employees—nearly double the overall average. Firms with 21 to 50 lawyers follow suit, with 36% offering hybrid work for some staff and 28% extending it to everyone. 

The data shows that the largest shift, however, is occurring at large firms. Only a small number of respondents worked in firms with 51 or more lawyers, but within that subset, 61% of firms offer hybrid schedules for all employees, nearly three times the overall average. Fully remote and fully in-office arrangements are rare in this group, with only 6% of firms choosing either extreme. 

Technology adoption follows a similar pattern. Large firms are moving faster when it comes to legal-specific generative AI tools. According to the survey data, 39% of firms with 51 or more lawyers report using legal-specific generative AI, compared to approximately 20% of firms with 50 or fewer lawyers, a rate that holds relatively steady across those smaller size bands. One reason for this large gap is likely that large firms already have costly major legal research platforms in place, most of which are rapidly integrating AI features.

Payroll software adoption rates are also impacted by firm size. While one-third of all legal professionals surveyed use payroll software, its adoption proves to be far more common in larger firms. According to the data, only 23% of solo practitioners use payroll software, likely because they have few, if any, employees. But at firms with 51 or more lawyers, 50% use payroll software, likely due to the increased administrative complexity inherent in larger firms. 

No matter how you look at it, the survey data makes one thing clear: firm size isn’t just about headcount. It impacts workflows, software choices, and office footprints. Large firms are leading the way in AI and hybrid work, while smaller and mid-size firms continue to favor in-office setups and take a more cautious approach to new technology. Solo practitioners fall somewhere in between. Whether it’s flexibility, automation, or AI-driven efficiency, firms of all sizes are making choices that reflect their unique needs. 

Where does your firm fit in? To obtain even more benchmark data, including insights by practice area, check out the full report for statistics on issues ranging from time tracking and billing software implementation to preferences about virtual court proceedings.
