
Beyond Simple Tools: vLex's Vincent AI and the Future of Trusted Legal AI Platforms

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Beyond Simple Tools: vLex's Vincent AI and the Future of Trusted Legal AI Platforms

There has been a noticeable shift in the way that legal technology companies are approaching generative artificial intelligence (AI) product development. Last year, several general legal assistant chatbots were released, mimicking the functionality of ChatGPT. Next came the emergence of single-point solutions, often developed by start-ups, to address specific workflow challenges such as drafting litigation documents, analyzing contracts, and conducting legal research.

As we approach the final quarter of 2024, established legal technology providers are more deeply integrating generative AI into comprehensive platforms, streamlining the user interface of legal research, practice management, and document management tools. Rather than standalone tools, generative AI is becoming a core feature of legal platforms, enabling users to access all their data and software seamlessly in one trusted environment.

A notable example is vLex, which acquired Fastcase last year and announced major updates to its Vincent AI product this week. Ed Walters, vLex’s Chief Strategy Officer, described the update as the transformation of an AI-powered legal research and litigation drafting tool into a full-fledged legal AI platform.

This release expands workflows for transactional, litigation, and contract matters, enabling users to 1) analyze contracts, depositions, and complaints, 2) perform redline analysis and document comparisons, 3) upload documents to find related authorities, 4) generate research memoranda, 5) compare laws across jurisdictions, and 6) explore document collections to extract facts, create timelines, and more.

Similarly, legal research companies LexisNexis and Thomson Reuters both rolled out revamped versions of their generative AI assistants last month, reinforcing the trend toward AI-driven platforms. LexisNexis introduced Protégé, an AI assistant designed to meet each user’s specific needs and workflows that serves as the gateway to a suite of LexisNexis products. Thomson Reuters, meanwhile, unveiled CoCounsel 2.0, an enhanced version of the AI assistant it originally launched last year. Built on technology from its acquisition of Casetext’s CoCounsel, this upgraded legal assistant acts as the central interface for accessing many Thomson Reuters tools and resources, streamlining workflows across its products.

Despite the platform trend, single-point AI solutions remain valuable, especially for solo or small firms looking to streamline specific tasks like document analysis, drafting pleadings, or preparing discovery responses. These standalone tools continue to be developed and offer significant value for firms not already invested in a software ecosystem with integrated AI. If you’re in the market for an AI tool that accomplishes only one task, there’s most likely an AI tool available that fits the bill.

However, for many firms, AI integration into the software platforms they already use will likely be the most practical path forward. This approach helps to bridge the implementation gap and addresses common concerns about trust, which are often barriers to AI adoption. By partnering with trusted legal technology providers, firms can more comfortably adopt AI by leveraging the security and reliability of the platforms already in place.

With the deeper integration of AI into comprehensive legal platforms, the adoption process will become smoother, and legal professionals will benefit from the reduced friction and tedium that come with more streamlined law firm processes. This shift will allow legal professionals to focus on more meaningful work, improving both the practice of law and client service.

Whether a standalone product or built into legal software platforms, generative AI offers significant potential, some of which is already being realized. It’s more than just another tool—it may very well redefine how law firms operate, paving the way for a more efficient and effective future.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].


Legal Ethics in the AI Era: The NYC Bar Weighs In

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Legal Ethics in the AI Era: The NYC Bar Weighs In

Since November 2022, when ChatGPT was first released, many jurisdictions have issued AI guidance. In this column, I’ve covered the advice rendered by many ethics committees, including those in California, Florida, New Jersey, Michigan, New York, Pennsylvania, and Kentucky, as well as the American Bar Association and, most recently, Virginia.

Now, the New York City Bar Association has entered the ring, issuing Formal Ethics Opinion 2024-5 on August 7th. The Association’s Committee on Professional Ethics mirrored the California Bar’s approach, providing general guidelines in a chart format rather than prescriptive requirements. The Committee explained that “when addressing developing areas, lawyers need guardrails and not hard-and-fast restrictions or new rules that could stymie developments.” Instead, the goal was to provide assistance to New York attorneys through “advice specifically based on New York Rules and practice…”

Regarding confidentiality, the Committee distinguished between “closed systems” consisting of a firm’s “own protected databases,” like those typically provided by legal technology companies, and systems like ChatGPT that share inputted information with third parties or use it for their own purposes. Client consent is required for the latter, and even with “closed systems,” confidentiality protections within the firm must be maintained. The Committee cautioned that the terms of use for a generative AI tool should be reviewed regularly to ensure that the technology vendor is not using inputted information to train or improve its product in the absence of informed client consent.

Turning to the duty of technology competence, the Committee opined that when choosing a product, lawyers “should understand to a reasonable degree how the technology works, its limitations, and the applicable [T]erms of [U]se and other policies governing the use and exploitation of client data by the product.” Also emphasized was the need to avoid delegating professional judgment to these tools and to consider generative AI outputs to be a starting point. Not only must lawyers ensure that the output is accurate, but they should also take steps to “ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand.”

The duty of supervision was likewise addressed, with the Committee confirming that firms should have policies and training in place for lawyers and other employees in the firm regarding the permissible use of this technology, including ethical and practical uses, along with potential pitfalls. The Committee also advised that any client intake chatbots used by lawyers on their websites or elsewhere on behalf of the firm should be adequately supervised to avoid “the risk that a prospective client relationship or a lawyer-client relationship could be created.”

Not surprisingly, the Committee advised that lawyers must be aware of and comply with any court orders regarding AI use. Another court-related issue addressed was AI-created deepfakes and their impact on the judicial process. According to the Committee’s guidance, lawyers must screen all client-submitted evidence to assess whether it was generated by AI, and if there is a suspicion “that a client may have provided the lawyer with Generative AI-generated evidence, a lawyer may have a duty to inquire.”

Finally, the Committee turned to billing issues, agreeing with other jurisdictions that lawyers may charge for time spent crafting inquiries and reviewing output. Additionally, the Committee explained that firms may not bill clients for time saved as a result of AI usage and that firms may want to explore alternative fee arrangements in order to stay competitive since AI may significantly impact legal pricing moving forward. Last but not least, any generative AI costs should be disclosed to clients, and any costs charged to clients “should be consistent with ethical guidance on disbursements and should comply with applicable law.”

The summary above simply provides an overview of the guidance provided. For a more nuanced perspective, you should read the opinion in its entirety. Whether you’re a New York lawyer or practice elsewhere, this guidance is worth reviewing and provides a helpful roadmap for adoption as we head into an AI-led future where technology competence is no longer an option. Instead, it is an essential requirement for the effective and responsible practice of law.



Practical and Adaptable AI Guidance Arrives From the Virginia State Bar 

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.

****

Practical and Adaptable AI Guidance Arrives From the Virginia State Bar 

If you're concerned about the ethical issues surrounding artificial intelligence (AI) tools, the good news is that there's no shortage of guidance. A wealth of resources, guidelines, and recommendations are now available to help you navigate these concerns. 

Traditionally, bar associations have taken years to analyze the ethical implications of new and emerging technologies. Generative AI, however, has upended that pattern: ethics guidance has emerged far more quickly, a very welcome change from the norm.

Since the general release of the first version of ChatGPT in November 2022, ethics committees have stepped up to the plate and offered much-needed AI guidance to lawyers at a remarkably rapid clip. Jurisdictions that have weighed in include California, Florida, New Jersey, Michigan, New York, Pennsylvania, Kentucky, and the American Bar Association. 

Recently, Virginia entered the AI ethics discussion with a notably concise approach. Unlike the often lengthy and detailed analyses from other jurisdictions, the Virginia State Bar issued a streamlined set of guidelines, available as an update at the bottom of its ethics page (https://vsb.org/Site/Site/lawyers/ethics.aspx). This approach stands out not only for its brevity but also for its focus on providing practical, overarching advice. By avoiding the intricacies of specific AI tools or interfaces, the Virginia State Bar has ensured that its guidance remains flexible and relevant, even as the technology rapidly evolves.

Importantly, the Bar acknowledged that regardless of the type of technology at issue, lawyers’ ethical obligations remain the same: “[A] lawyer’s basic ethical responsibilities have not changed, and many ethics issues involving generative AI are fundamentally similar to issues lawyers face when working with other technology or other people (both lawyers and nonlawyers).”

Next, the Bar examined confidentiality obligations, opining that just as lawyers must review data-handling policies relating to other types of technology, so, too, must they vet the methods used by AI providers when handling confidential client information. The Bar explained that while legal-specific providers can often promise better data security, there is still an obligation to ensure a full understanding of their data management approach: “Legal-specific products or internally-developed products that are not used or accessed by anyone outside of the firm may provide protection for confidential information, but lawyers must make reasonable efforts to assess that security and evaluate whether and under what circumstances confidential information will be protected from disclosure to third parties.”

One area where the Bar’s approach conformed to that of most jurisdictions was client consent. While the ABA suggested that explicit client consent was required in many cases when AI is used, the Bar agreed with most other ethics committees, concluding that there “is no per se requirement to inform a client about the use of generative AI in their matter” unless there are extenuating circumstances, such as an agreement with the client or increased risks like those encountered when using consumer-facing products.

The Bar also considered supervisory requirements, emphasizing the importance of reviewing all output just as lawyers would review work from any other source. According to the Bar, “with any legal research or drafting done by software or by a nonlawyer assistant, a lawyer has a duty to review the work done and verify that any citations are accurate (and real),” and that duty of supervision “extends to generative AI use by others in a law firm.”

Next, the Bar provided insight into the impact of AI usage on legal fees. The Bar agreed that lawyers cannot charge clients for the time saved as a result of using AI: “A lawyer may not charge an hourly fee in excess of the time actually spent on the case and may not bill for time saved by using generative AI. The lawyer may bill for actual time spent using generative AI in a client’s matter or may wish to consider alternative fee arrangements to account for the value generated by the use of generative AI.”

On the issue of passing the costs of AI software on to clients, the Bar concluded that doing so was not permissible unless the fee is both reasonable and “permitted by the fee agreement.”

Finally, the Bar addressed a handful of recently issued court rules restricting the use of AI for document preparation, highlighting the importance of being aware of and complying with all court disclosure requirements regarding AI usage.

The Virginia State Bar’s flexible and practical AI ethics guidance offers a valuable framework for lawyers as they adjust to the ever-changing generative AI landscape. By focusing on overarching principles, this thoughtful approach ensures adaptability as technology evolves. For those seeking reliable guidance, Virginia’s model offers a useful roadmap for remaining ethically grounded amid unprecedented technological advancements.
