
Should Using AI Mean Lower Fees? Virginia Ethics Committee Weighs In

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Should Using AI Mean Lower Fees? Virginia Ethics Committee Weighs In

When legal professionals experiment with new technologies, knee-jerk reactions from ethics committees often follow. Generative artificial intelligence (AI) is no exception. As with the technologies that preceded it, lawyers seeking to use AI in their practices have faced unnecessary restrictions or requirements.

One example is the duty to advise clients of its use, which is included in many ethics opinions on AI. This requirement rears its ugly head whenever technologies are novel, but is ultimately abandoned when they become ubiquitous. This same evolution will occur with AI.

Another common issue that crops up is how to properly and ethically bill clients for legal services when efficiency gains arise from the incorporation of new technologies like AI into legal workflows.

This issue was addressed most recently in March in Proposed Legal Ethics Opinion 1901 from the Virginia State Bar. The proposed opinion, which was released pending public comment, is devoted to determining the reasonableness of legal fees when generative AI is used to provide legal services.

The opinion was surprisingly nuanced, and the approach was thoughtful and balanced. For starters, the Legal Ethics Committee wisely acknowledged that its conclusions were not limited to AI usage: “Though this opinion is specifically addressing productivity improvements generated through the use of generative AI, its principles may be equally applicable to a lawyer’s use of other technological tools that result in comparable productivity improvements.”

The Committee explained that the time saved from AI efficiency gains does not automatically require lawyers to reduce their fees. Instead, in addition to the actual legal work, the knowledge required to effectively evaluate and apply AI tools has value: “(T)he ‘skill requisite to perform the legal service properly’ might actually increase…The lawyer's judgment in determining when and how to deploy AI tools, and the expertise needed to critically evaluate AI-generated content, represent valuable services for which the lawyer reasonably can be compensated.”

Notably, the Committee disagreed with the conclusion reached in ABA Formal Opinion 512—that “it may be unreasonable under Rule 1.5 for the lawyer to charge the same flat fee when using the GAI tool as when not using it.” 

Instead, the Committee determined that the result of AI-driven productivity gains should not effectively penalize lawyers by reducing flat fees. Pursuant to Rule 1.5(b), all legal fees must be reasonable, “but the time spent on a task or the use of certain research or drafting tools should not be read as the preeminent or determinative factor in that analysis.” Lawyers should not be required to forfeit reasonable profit “if clients continue to receive value from the lawyer’s output.”

However, Rule 1.5(b) requires lawyers to adequately explain the cost of legal work, including value-based fees, to clients. According to the Committee, if an AI tool significantly increases efficiency, it may be necessary to offer clients additional context about the legal work provided: “(T)he client may need additional explanation of why the lawyer’s experience, technical skills, or other efficiencies contribute to the value of the services and determination of the fee.”

I agree with the Committee’s approach. Ethics rules shouldn’t reward inefficiency or punish lawyers for using the right tools. If a lawyer’s expertise includes knowing when—and how—to apply AI effectively, that judgment has value. Clients deserve transparency, but efficiency shouldn’t come at the cost of fair compensation. 

The Virginia opinion reflects a familiar pattern in legal ethics: New technologies often prompt early overcorrections that fade once the tools become widely accepted. Generative AI is no different, and this opinion moves us closer toward treating it that way.

If you’re a Virginia legal professional, don’t overlook this proposed opinion—it’s thoughtful, relevant, and worth your time. Public comments are open to all through May 7, 2025: [email protected]. This is your chance to be heard if you have strong opinions about these issues.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].

Oregon’s AI Ethics Opinion: A Wake-Up Call for Lawyers

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Oregon’s AI Ethics Opinion: A Wake-Up Call for Lawyers

In February, the Oregon Board of Governors approved Formal Opinion 2024-205, which addresses how Oregon lawyers can ethically use artificial intelligence (AI) and generative AI in their practices. 

The opening line of the opinion is notable: “Artificial intelligence tools have become widely available for use by lawyers. AI has been incorporated into a multitude of products frequently used by lawyers, such as word processing applications, communication tools, and research databases.” While that conclusion may be true today, it is a relatively recent development: ChatGPT, powered by GPT-3.5, was not publicly released until the end of November 2022, and AI was rarely found in legal software until approximately 2015, when it began appearing more often in legal research, contract analysis, and litigation analytics tools.

This recent trend of increased AI adoption by legal professionals has prompted an extraordinarily rapid response from ethics committees. Since 2023, more than 15 jurisdictions and bar organizations, including the American Bar Association, Florida, New York, Texas, Pennsylvania, and North Carolina, have issued ethics opinions addressing AI use by lawyers. Oregon now adds to this growing body of guidance.

The Oregon opinion’s guidance aligns closely with the conclusions reached in ABA Formal Opinion 512 (2024) and addresses key ethical issues, including competence, confidentiality, supervision, billing, and candor to the court.  

Tackling competence, the Oregon Legal Ethics Committee explained: “(AI) competence requires understanding the benefits and risks associated with the specific use and type of AI being used,” and the obligation is ongoing.

Next, the Committee considered client disclosure, explaining that Oregon lawyers may be required to disclose AI use to clients. The decision to do so needs to be made on a case-by-case basis and factors to consider include “the type of case, similarities to and deviations from technology typically used, novelty of the technology, risks to client data, risks that incorrect information will be included in the lawyer’s work product, sophistication of the client, deviation from explicit client instructions or reasonable expectations, the scope of representation, the extent of the lawyer’s reliance on the technology, the existence of safeguards present in the technology and independently implemented by the lawyer, and whether the use of AI or other new technology would have a significant impact on attorney fees or is a cost passed on to the client.”

Turning to fees, the Committee joined many other jurisdictions in determining that lawyers may only charge clients for the reasonable time actually spent using AI for “case-specific research and drafting” and cannot bill for time that would have been spent on the case but for the implementation of AI tools. Billing for time spent learning how to use AI may only occur with the client’s consent. If a firm intends to invoice clients for the cost of AI tools, clients must be informed, preferably in writing. And if a lawyer is unable to determine the actual cost of a specialized AI tool used in a client matter, prorated cost billing is impermissible in Oregon; the charges should instead be treated as overhead.

To protect client confidentiality, lawyers seeking to input confidential information into an “open” model, which allows the input to train the AI system, must obtain consent from their client. The Committee cautioned that even when using a “closed” AI tool that does not use input to train the model, lawyers must carefully vet providers to ensure that vendor contracts address how data is protected, including how it will be handled, encrypted, stored, and eventually destroyed.  According to the Committee, even when using a closed AI model, it may be appropriate “to anonymize or redact certain information that (clients deem) sensitive or that could create a risk of harm…”

Next, the Committee opined that managerial and supervisory obligations require firms to have policies in place that provide clear guidelines on permissible AI use by all lawyers and staff. Additionally, the Committee confirmed that lawyers must carefully review the accuracy of both their own AI-assisted work product and that prepared by subordinate lawyers and nonlawyers.

Finally, the Committee confirmed that Oregon lawyers must be aware of and comply with all court orders regarding AI disclosure. Additionally, they are required to carefully review and verify the accuracy of AI output, including case citations. Should an attorney discover that a court filing includes a false statement of fact or law, they must notify the court and correct the error, taking care to avoid disclosing client confidences.

For Oregon attorneys, this opinion is a “must read,” just as it is for lawyers in jurisdictions that have not weighed in on these issues. Regardless of your licensure, the release of this opinion, along with more than 15 others in such a short period of time, should be a wake-up call. The pace of change isn’t slowing. If you haven’t started learning about AI, now is the time. The technology is advancing quickly; failing to learn about it now will only make it harder to catch up. 

These opinions aren’t just academic—they’re a warning. To make informed, responsible decisions about how and when to use AI, lawyers need to start paying attention today.



From Resistance to Reality: Lawyers Can No Longer Ignore AI

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

From Resistance to Reality: Lawyers Can No Longer Ignore AI

Historically, our profession’s response to emerging technologies has been lukewarm at best, sometimes peppered with contempt. For many lawyers, technology was viewed as an unwelcome and unnecessary intrusion into their purposefully analog world. 

However, as the pace of innovation accelerated, attitudes necessarily changed, with technology resistance proving futile and counterproductive. These shifts in perspective were often reinforced by timely ethics guidance, which helped clear the path for compliant technology adoption.

With the advent of generative artificial intelligence (AI), advancements have unfolded faster than ever before. Since its emergence, I have tracked generative AI and the corresponding ethics opinions handed down by bar associations across the country. If you follow my column, you know that the rapidity of the response and the depth of the advice provided have been unprecedented and much-needed.

Jurisdictions have taken many approaches, with some providing general guidance, others drafting reports, and still others issuing ethics opinions. Texas, however, has taken a dual-pronged approach, releasing both a report and an ethics opinion. 

In July of last year, I wrote about the “Interim Report to the State Bar of Texas Board of Directors,” authored by the Texas Taskforce for Responsible AI in the Law, which addressed the benefits and risks of AI and included recommendations for the ethical adoption of these tools. 

Then, earlier this month, the Professional Ethics Committee for the State Bar of Texas released Opinion 705. In it, the Committee addressed the ethical issues raised under the Texas Disciplinary Rules of Professional Conduct when lawyers use generative artificial intelligence in the practice of law.

Notably, the Committee acknowledged that generative AI and technology, generally, are ever-changing. As a result, its guidance is “intended only to provide a snapshot of potential ethical concerns at the moment and a restatement of certain ethical principles for lawyers to use as a guide regardless of where the technology goes.”

The Committee’s roadmap for generative AI adoption did not differ significantly from that issued in other jurisdictions. Even so, it provided helpful steps for lawyers to take when choosing and adopting AI into their firms.

At the outset, technology competence, a necessary foundation for ethical AI implementation, was addressed. The Committee encouraged lawyers to be open-minded about technology that could reduce inefficiencies. AI clearly falls under that category but requires careful vetting prior to its adoption. Accordingly, lawyers intending to use AI in their firms “must have a reasonable and current understanding of the technology—because only then can the lawyer evaluate the associated risks of hallucinations or inaccurate answers, the limitations that may be imposed by the model’s use of incomplete or inaccurate data, and the potential for exposing client confidential information.”

The Committee provided a roadmap for implementation, offering suggested steps to take to ensure compliant AI adoption that preserves client confidentiality: 1) Acquire a foundational understanding of how the technology functions;  2) Review and, if necessary, renegotiate the terms of service the lawyer agrees to when using the generative AI tool; 3) Assess the generative AI tool’s data security measures, recognizing that even if user inputs aren’t intentionally shared, stored information may be vulnerable to hacking; and 4) Train lawyers and staff on the proper use of generative AI tools to safeguard client confidentiality.

The Committee cautioned lawyers about permitting AI tools to train on inputted data, highlighting the importance of understanding how a specific tool works, and suggested that in some cases, it may be advisable to obtain client consent before submitting confidential information as part of a query. The need to verify the accuracy of information provided by generative AI tools was also emphasized. 

Finally, the Committee determined that generative AI fees can be passed on to clients as long as they’ve consented. Furthermore, lawyers may charge only for the time actually spent on a client’s matter and cannot charge for time saved due to the efficiency gains offered by generative AI software.

With ethical guidance widely available, nothing stands in the way of AI adoption. Bar associations nationwide have provided clear ethics guidance outlining how to choose and use generative AI responsibly. The roadmap is there—understand the technology, assess risks, protect client data, and ensure compliance. The time for skepticism has passed; now, the focus is on responsible, informed implementation. Moving forward, the choice isn’t whether to engage with AI but how to do so ethically and effectively.



AI in Law Firms: Ethics Committees Are Clearing the Path Forward

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

AI in Law Firms: Ethics Committees Are Clearing the Path Forward

Whenever new technologies become available to lawyers, ethical concerns initially pose a significant hurdle to adoption. This friction is understandable. Legal professionals are necessarily cautious about tinkering with trusted legal workflows and processes, sometimes resulting in a reluctance to embrace new and innovative ways of working. Our clients trust us with highly sensitive information, and we have an obligation to ensure that confidentiality is not compromised when implementing new software.

We rely on ethics opinions to assist with technology transitions, but their arrival isn’t always as timely as we’d like. Historically, ethics committees have taken years to weigh in on issues like lawyers' use of social media or cloud computing. That delay in issuing guidance often slows down or even prevents industry-wide technology adoption.

With the arrival of generative artificial intelligence (AI), a roadmap to ethical adoption was needed, and quickly, given the unprecedented rate of advancement. Fortunately, bar associations nationwide rose to the occasion, issuing timely and in-depth guidance in months, not years. Since the spring of 2023, many jurisdictions and bar organizations have released guidance or opinions on the ethics of using AI in law firms, including California, Florida, New Jersey, Michigan, New York, Pennsylvania, Kentucky, the American Bar Association, Virginia, D.C., and New Mexico.

Most recently, North Carolina joined their ranks in November, handing down 2024 Formal Ethics Opinion 1. In the opinion, the Ethics Committee addressed six inquiries about ethical AI adoption.  

First, the Committee concluded that lawyers may integrate AI into their practice as long as they do so competently, protect client confidentiality, and properly supervise AI-generated work. This aligns with existing ethical obligations requiring attorneys to maintain competence, safeguard client information, and oversee nonlawyer assistants.

Second, the opinion addressed whether attorneys can share client information with third-party AI programs. According to the Committee, lawyers may use these tools only after they have fully vetted the provider and verified that adequate security measures are in place to prevent unauthorized access or disclosure, thus ensuring compliance with confidentiality rules.

Third, the Committee reviewed ethical obligations when using in-house AI systems, opining that doing so does not exempt lawyers from their ethical responsibilities. Attorneys must implement strong security measures and remain vigilant against potential vulnerabilities, maintaining the same diligence level as third-party AI services.

Fourth, the opinion emphasized that lawyers remain fully responsible for the accuracy and reliability of AI-generated pleadings submitted to the court. Attorneys must carefully review all AI outputs to ensure legal and factual accuracy, just as they would any other work product.

Fifth, the Committee considered whether AI use must be disclosed to clients, explaining that attorneys do not need to inform clients when AI is used for routine tasks like legal research. However, consent is required if AI is employed for “substantive tasks in furtherance of the representation.” That treatment parallels work performed by a nonlawyer or third-party service, both of which require client consent.

Finally, the issue of billing practices was addressed. The Committee determined that lawyers must bill clients only for the time spent performing the work, and cannot bill for time saved as a result of using AI. However, attorneys may charge clients for reasonable AI-related expenses, provided those costs are disclosed upfront. Notably, the Committee suggested that lawyers consider flat fee payment structures when drafting documents with AI.

Like the guidance that preceded it, this opinion provides North Carolina lawyers with a clear roadmap to AI adoption. The Committee reinforces AI’s benefits and offers practical steps to ensure compliance with ethical duties. With abundant ethics guidance available, there’s no excuse to fall behind. Now is the time to embrace AI and take full advantage of all it offers.



NY Ethics Opinion Addresses Lawyer Imposter Accounts

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

NY Ethics Opinion Addresses Lawyer Imposter Accounts

Imitation is the sincerest form of flattery—or is it? What if the mimicry in question harms your professional reputation? Even worse, what if your reaction to it, or lack thereof, could negatively impact your license to practice law?

Recently, a New York immigration attorney grappled with those very questions and sought answers from the New York State Bar Association’s Committee on Professional Ethics in Ethics Opinion 1276 (online: https://nysba.org/ethics-opinion-1276-fake-social-media-accounts-offering-legal-services/).

In this case, the inquiring attorney had a very successful online presence. He posted videos to a popular social media platform explaining immigration information and procedural processes to the general public. Eventually, he discovered that someone had been using his name, photo, and immigration videos to create fake accounts on the same social networking site. 

Fearing that legal consumers would be scammed into contacting and possibly trying to hire the unknown imposter, he reported the accounts and requested their removal, to no avail. He also posted videos using his social media account warning his followers about the issue.

After doing so, he was concerned that he could be ethically required to take further action, so he submitted this query to the Committee: “Does a lawyer have any ethical obligations with regard to fake social media accounts in which scammers use the lawyer’s social media videos to attract clients?”

In reaching its determination, the Committee first considered whether any affirmative misconduct on the inquirer’s part had occurred. The Committee answered in the negative, explaining that instead, “he is the victim of misconduct by others (who may not even be lawyers), and those others – over Inquirer’s objection – are engaging in improper conduct. Far from assisting or inducing the scammers to engage in this conduct, Inquirer is trying to impede or stop the scammers and to warn the public about [them]. He therefore is not violating Rule 8.4(a)-(c).”

The next issue addressed was whether he acted unethically by failing to respond more assertively to the fake accounts. According to the Committee, three Rules of Professional Conduct require lawyers to report certain types of misconduct. Two of those rules (Rules 3.3(a)(3) and 3.3(b)) were deemed inapplicable since they require lawyers to report behavior occurring “before a tribunal” or “during a proceeding,” respectively. Because the conduct was occurring online and had nothing to do with a pending case, there was no obligation under the rules to report the conduct.

The Committee then examined the third provision, Rule 8.3(a), and concluded that it, too, did not apply to the case at hand. Rule 8.3(a) provides that a lawyer who “knows that another lawyer has committed a violation of the Rules of Professional Conduct that raises a substantial question as to that lawyer’s honesty, trustworthiness or fitness as a lawyer shall report such knowledge to a tribunal or other authority empowered to investigate or act upon such violation.” This provision does not apply here because the Inquirer does not know whether any person creating the fake social media accounts is a lawyer.

Based on its analysis, the Committee concluded that the inquiring attorney had no duty under the Rules to report or otherwise combat the fake accounts. 

While the fake accounts are undoubtedly frustrating, the good news is the lawyer doesn’t have to worry about any ethical fallout. The Committee made it clear—being impersonated online isn’t something the Rules of Professional Conduct require lawyers to address proactively.



Florida Bar Releases Handy Generative AI Guide for Lawyers

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Florida Bar Releases Handy Generative AI Guide for Lawyers

One of the most common questions I hear from lawyers about generative artificial intelligence (AI) is “Where do I start?” The pace of change since the release of ChatGPT in November 2022 has been dramatic—so much so that tracking the latest GAI developments can feel like a full-time job. 

If you haven’t prioritized AI education, then getting up to speed can be a challenge. And even then, once you wrap your head around the basic concepts, there’s still a long way to go. You’ll need to determine your firm’s needs, learn about the different types of tools available, and understand ethical compliance issues.

Armed with that knowledge, the next steps are to research and vet providers–including the tools your firm is already using that may have embedded GAI–and then choose and implement the tools that will work best for your firm. After that, firmwide education and training should follow.

It’s a lot to think about, isn’t it? No wonder so many lawyers feel overwhelmed.

Fortunately, bar associations have risen to the occasion, regularly issuing ethics opinions and AI guidance over the past two years. A notable and very recent example is the “Florida Bar Guide to Getting Started with AI.” It provides a user-friendly, broad overview of AI and generative AI and includes definitions, explanations of the technologies, an analysis of ethical issues, implementation advice, practical resources, and much more.

Notably, the authors emphasize the importance of technology competence and making educated decisions about adopting legal technology, including AI: “Each lawyer should explore and make the decision whether to use AI or not based on their individual practices and circumstances, being mindful of applicable ethical rules as well as any unique risks from using particular AI models.”

The authors also highlight the differences between AI tools and explain how legal-specific tools reduce errors in output by training on highly relevant legal data: “There are general and law-specific AI models. General models are trained on large sets of human-created data, while legal models take a general model and fine-tune it using law-specific data, such as court opinions, law review articles and example documents. Legal models usually have constraints on the sources of information they use in creating their responses, which are intended to reduce hallucination risk.”

As with any technology, carefully vetting the vendor and its product is essential when choosing an AI provider: “When you find a general AI vendor you like, check its security reputation, hallucination risk, various AI model features, and paid plan options for individuals or businesses.”

An important takeaway from the guide is that the current state of technology requires that all responses be carefully reviewed to identify any errors:  “(A)lways verify AI-generated outputs yourself to ensure accuracy and reliability, as AI should assist, not replace, human judgment.”

Supervisory responsibilities are also called out, with an emphasis on the need to ensure internal firm guidance and procedures are in place before implementing AI tools: “(I)f associates or nonlawyers will be using AI in your firm, consider a user training program and written guidelines for proper AI usage for client matters.”

Finally, examples of use cases for both general and legal-specific AI are provided. The authors explain that general AI software could be used to draft administrative letters or marketing articles, generate summaries of non-legal documents, and customize presentations. Work that can be completed using task-appropriate legal AI tools includes legal research, document review, document drafting, case preparation, and electronic discovery.

If you’re one of those lawyers who isn’t sure how to get started with GAI, the Florida Bar’s guide is an ideal resource. It breaks down the basics, explains key ethical considerations, and offers practical advice for choosing and implementing AI tools. Staying informed and proactive is essential, and this guide gives you the knowledge and tools you need to approach AI confidently and responsibly.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally-recognized author of "Cloud Computing for Lawyers" (2012) and co-authors "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


The Year Ahead in Legal Tech: AI, Innovation, and Opportunity

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

The Year Ahead in Legal Tech: AI, Innovation, and Opportunity

Looking back on 2024, this Grateful Dead lyric comes to mind: “What a long, strange trip it’s been.” It perfectly captures the upheaval of the last four years, which were nothing if not unpredictable and tumultuous. A worldwide pandemic closed our borders—and our offices—but we never stopped working. Business carried on as usual even as we struggled to wrap our minds around the realities of living in the midst of a deadly, highly contagious virus.  

Technology saved the day. Without it, our world would have come to a grinding halt. Instead, it ushered in a newfound receptivity to cloud and remote working software, priming us for what came next: the generative AI era. 

In late 2022, just as normalcy seemed to return, the release of GPT-3.5 made generative AI a catalyst for unprecedented change. It marked a turning point: from there, technological advancement occurred at a rapid clip, with 2024 seeing the continued integration of generative artificial intelligence (AI) into the tools legal professionals rely on.

The pace of AI development over the past year, however, was slower than many had predicted. Nevertheless, the impact on the practice of law overall was significant. Legal professionals continued to learn about and experiment with generative AI for many tasks, including legal research, document drafting and editing, brainstorming, and more. 

In fact, according to the 2025 AffiniPay Legal Industry Report, which will be published in the spring, one-fifth of firms have already adopted legal-specific generative AI tools. Personal adoption was even more significant. For example, 47% of immigration practitioners reported personally using generative AI for work-related purposes.

In the coming year, you can expect to see a heightened pace of AI development with generative AI appearing as the interface in all the tools you regularly use in your law firm. From legal research and practice management to legal billing and knowledge management, generative AI conversational interactions will increasingly be the mechanism through which you access all of the information you need to effectively represent your clients’ interests.

You’ll also notice that generative AI will be more deeply embedded into your firm’s IT stack, enabling in-depth analysis of your office's data, including client matters, documents, finances, billable hours, employee productivity, and more. This ability to easily access the metrics needed to run a productive, efficient, and profitable practice will make all the difference and will enable firms to scale and compete more easily in an increasingly competitive, AI-driven legal marketplace. 

Additionally, as generative AI becomes seamlessly embedded into everyday tools, you might not even realize you’re using it. One immediate effect of this deeper-level integration will be that court rules banning AI-generated documents will quickly become outdated and impractical, in part because they could effectively prohibit lawyers from using essential technology altogether. 

Another notable trend in 2025 will be continued regulatory changes and further ethics guidance. Bar associations will issue additional ethics opinions and guidelines that provide roadmaps for compliant AI implementation, effectively removing the remaining barriers that stand in the way of broad-scale adoption. 

Similarly, regulatory changes impacting bar exam and licensure requirements highlight a broader effort to make legal services more accessible. As states revisit licensure rules and AI ethics frameworks evolve, the legal landscape will continue to shift in the face of these efforts to balance innovation with the profession’s core principles.

In other words, if you thought the last few years brought unwelcome upheaval, brace yourself—there’s more to come. Rest assured, our profession won’t be immune from the changes and will likely be impacted far more than others. 

So get ready. Dive in and ensure you’re maintaining technology competence. Sign up for tech-related CLEs, experiment with generative AI, and learn as much as you can about emerging and innovative technologies. 2025 is sure to be a year for the record books, and now is the time to prepare yourselves for what will come. 



The Risks of Using Dropbox for Client Files

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

The Risks of Using Dropbox for Client Files

Years ago, after the American Bar Association published my “Cloud Computing for Lawyers” book, I was often asked to speak to lawyers about the benefits and risks of implementing cloud computing in law offices. At the time, most lawyers weren’t sure what cloud computing was but were nevertheless confident that they didn’t trust it and didn’t want to adopt it into their firms. 

Ironically, most of them were already using it and just didn’t realize it.

I know this because I would often begin my talks by asking how many people in the room used cloud computing tools. Inevitably only a few attendees would respond affirmatively. Then, I would ask how many in the audience had shared files using Dropbox, and at least half of the people in the room would raise their hands. In other words, most lawyers were using cloud computing, whether it was Dropbox, Box, or Gmail, and simply didn’t realize it.

Fast-forward to the present day, and how times have changed! According to data from the 2022 MyCase Legal Industry Report, the vast majority (80%) of legal professionals surveyed reported that their firms had cloud computing tools in place in their workplaces post-pandemic. Before the pandemic, most survey data showed that cloud computing adoption in the legal profession had remained stable for a number of years at just under 40%.

Despite the increased adoption, the risks associated with cloud computing haven’t changed. As part of your duty of technology competence, it’s essential to carefully vet cloud providers to ensure that your firm’s confidential data is securely stored and encrypted. Whenever you entrust your law firm’s data to a third party you must ensure that you fully understand the procedures and protections in place. This duty includes obtaining information as to how the data will be handled by that company, where the servers on which the data will be stored are located, who will have access to the data, and how often and when it will be backed up, among other things.

Also important is ensuring that the software you choose has features that will protect your client data and that you and your employees receive the necessary training and are familiar with the program's features. The failure to provide proper training and choose a secure platform designed for law firms that includes the features needed to protect confidential data can have unintended consequences.

Case in point: a recent disciplinary reprimand issued by the Indiana Supreme Court. At issue in Matter of James H. Lockwood, Supreme Court Case No. 24S-DI-319, was the respondent’s failure to secure client files stored in Dropbox. 

Specifically, Lockwood had represented a client in a protective order case, and that same client had also worked at Lockwood’s firm for several months as an unpaid non-attorney assistant. During that time, Lockwood gave the client a Dropbox cloud storage link that provided access to firm materials and client files. The client stopped working for the firm in January 2023, but Lockwood failed to secure or deactivate the Dropbox link, which remained active and unsecured at least through May 2024.

Based on that conduct, the Court concluded that Lockwood had violated Indiana Professional Conduct Rule 1.6, which prohibits “revealing information relating to representation of a client without the client’s informed consent.”

This mishap could have been prevented by using software specifically designed for legal professionals. Legal-specific tools address lawyers’ ethical obligations and ensure compliance with confidentiality and data security requirements. These cloud tools often include features like encryption, controlled permissions and access, and activity tracking to ensure client information stays protected.

The lesson to be learned: if your firm still relies on general-use software like Dropbox, it’s time to reconsider your choices and transition to tools designed specifically for legal professionals. Legal-specific platforms address the unique needs of law firms, offering the security and compliance features needed to protect client data and conform to professional standards. Now is the time to make this change to protect client data, and your law license, from unnecessary and preventable risk.



Judicial Ethics: Navigating the AI Era

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Judicial Ethics: Navigating the AI Era

Over the past two years, generative artificial intelligence (AI) ethics guidance has been plentiful, with many state bars swiftly responding to the increasing need for AI adoption advice. Within months of ChatGPT’s initial release in November 2022, the risks of using generative AI in legal practice were alarmingly clear, as captured in numerous sensational headlines. The benefits were also evident, with the speed of adoption outpacing the learning curve needed to utilize these tools competently. As more AI ethics opinions were issued, a clear path to ethical adoption emerged for lawyers.

But what about judges and court staff? Generative AI offers obvious benefits that could significantly increase efficiencies and remove tedium from their daily workflows by streamlining legal research and the drafting of orders and opinions. Of course, ethical implementation of AI by the courts is essential, and while the risks presented are similar to those encountered by lawyers, there are also considerations unique to the judiciary.

The good news is that some guidance is available. For starters, in October 2023, two different judicial ethics opinions were released. The first, JIC Advisory Opinion 2023-22, was issued on October 13, 2023, by the West Virginia Judicial Investigation Commission.

The Commission determined that judges may use AI for research purposes but not when deciding the outcome of a case. It also advised that extreme caution should be taken when using AI to assist with drafting orders or opinions. Finally, the Commission emphasized the importance of maintaining technology competence when using AI, clarifying that the duty is ongoing.

Later that month, on October 27, 2023, judicial ethics opinion JI-155 was issued in Michigan. The focus of this opinion was technology competence. Like the West Virginia opinion, it advised judges to maintain competence regarding technology, including AI: “(J)udicial officers have an ethical duty to maintain technological competence and understand AI’s ethical implications to ensure efficiency and quality of justice (and) take reasonable steps to ensure that AI tools on which their judgment will be based are used properly and that the AI tools are utilized within the confines of the law and court rules.”

More recently, Delaware and Georgia issued orders addressing the judiciary's use of AI. On October 21, 2024, the Delaware Supreme Court adopted an interim AI policy for judges and court personnel (online: https://courts.delaware.gov/forms/download.aspx?id=266848). It requires users to maintain technology competence and outlines the appropriate usage of authorized AI tools, including the requirement that “(u)sers may not delegate their decision-making function to Approved GenAI.” 

Georgia’s Order established its Ad Hoc Committee on Artificial Intelligence (online: https://jcaoc.georgiacourts.gov/wp-content/uploads/2024/10/AI_Committee_Orders.pdf). The Order appointed sixteen members to the committee, whose mission is to assess “the risks and benefits of the use of generative AI on the courts and to make recommendations to ensure that the use of AI does not erode public trust and confidence in the judicial system.”

While guidance for the judiciary has been less plentiful, it remains valuable. These guidelines offer a clear roadmap for adopting AI responsibly, ensuring that judicial integrity is preserved throughout implementation. As AI technology advances rapidly, the judiciary must keep pace by leveraging AI’s potential to streamline processes and improve the quality of justice. By committing to continuous education and adhering to these standards, courts can gain the benefits of AI while upholding judicial integrity and maintaining public trust.


Amid a Flurry of AI Ethics Opinions, New Mexico Weighs In

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Amid a Flurry of AI Ethics Opinions, New Mexico Weighs In

Did you know that in less than two years, more than ten U.S. jurisdictions have issued guidance on generative artificial intelligence (AI)? For the past two decades, I’ve written about legal technology. My goal has always been to help legal professionals navigate the twists and turns of 21st-century innovations. From blogging and social media to cloud and mobile computing, I’ve encouraged members of my profession to actively learn about and implement technology into their practices.

Initially, my efforts felt like swimming upstream. Very few colleagues were receptive, and only the most tech-savvy showed interest in new tools and platforms. Adoption rates were slow, and my attempts to educate were often met with indifference.

Then, in early 2020, the pandemic struck, forcing lawyers to work remotely, conduct meetings online, and rely heavily on cloud-based tools. Attitudes shifted almost overnight, leading to a dramatic spike in technology adoption.

In many ways, the pandemic primed legal professionals to be open to new tools and ways of working. This change of heart could not have come at a better time: when generative AI arrived, attorneys were immediately receptive and curious about its potential to streamline their workflows and increase law firm profitability.

When ChatGPT, powered by GPT-3.5, was released in November 2022, it amounted to a technological tidal wave whose impact on the practice of law continues to be felt today. The amount of ethics guidance handed down over the past two years focused on a single technology is unprecedented. This rapid response reflects both heightened concerns about potential risks and an acknowledgment of the significant impact that AI could have on the practice of law. By my count, at least ten jurisdictions and bar organizations have issued guidance or opinions on the ethics of using AI in law firms: California, Florida, New Jersey, Michigan, New York, Pennsylvania, Kentucky, Virginia, D.C., and the American Bar Association.

Most recently, New Mexico joined their ranks, issuing Formal Ethics Advisory Formal Opinion 2024-005 (Online: https://www.sbnm.org/Portals/NMBAR/GenAI%20Formal%20Opinion%20-%20Sept_2024_FINAL.pdf). At issue was whether lawyers may use generative AI in the practice of law. The short answer? Yes.

The State Bar of New Mexico Ethics Advisory Committee determined that, generally speaking, “the responsible use of Generative AI is consistent with lawyers’ (ethical) duties.” According to the Committee, generative AI offers many potential benefits for lawyers and their clients, increasing efficiency and reducing costs.

The Committee offered a number of example use cases, including the initial drafting of legal documents and routine correspondence, assisting with drafting complex contracts or preparing to cross-examine witnesses, and streamlining discovery.

Importantly, the Committee clarified that lawyers are not required to use this technology, but “those lawyers who choose to do so…must do so responsibly, recognizing that the use of Generative AI does not change their fundamental duties under the Rules of Professional Conduct.” 

Interestingly, the Committee offered a unique take on the risk of law firm data being used to train AI models. According to the Committee, conflict of interest issues could be triggered when using generative AI since “there is a risk that future outputs may use information relating to the prior representation or concurrent representation by another lawyer in the same firm in a way that disadvantages the prior/other client.” The Committee cautioned that if lawyers are unable to verify a lack of a conflict, they should avoid inputting confidential client data into a generative AI tool unless they’ve confirmed that the tool possesses safeguards that “protect prior client information and…screen potential conflicts.”

The Committee also addressed many other ethical issues that are implicated when lawyers use generative AI, including confidentiality, candor toward the tribunal, AI costs and billing, and supervisory issues. Make sure to read the full opinion for their in-depth analysis of these topics, especially if you happen to practice law in New Mexico.

No matter where you practice, one thing is clear: keeping up with the pace of change is essential. Given generative AI's rapid advancement, it is more important than ever to stay informed, uphold ethical standards, and take full advantage of AI’s benefits. By doing so, you’ll be well-positioned to thrive, ultimately providing better client service and staying ahead of the curve in an increasingly competitive legal marketplace.
