Generative AI in Law: Resistance is Futile
I started writing this column on legal technology in 2007, and over the years I’ve noticed a pattern. Time and time again, whenever a new technology comes along that impacts the practice of law, members of our profession have a knee-jerk reaction to it. There’s talk of “bans,” predictions of dire ethical consequences, and warnings that the sky is about to fall.
First, it was blogging, followed by social media, mobile phones, tablets, cloud computing, and artificial intelligence. As each new technology emerged on the scene, there was collective outrage, disdain, and promises of imminent regulatory peril. Curmudgeonly pundits - especially those whose job functions were imperiled by each new wave of technology - prophesied looming threats to law licenses, client confidentiality, and the reputation of the profession as a whole. Each new technology was viewed as a threat to the very foundation of the practice of law.
Of course, this pattern started long before I entered the world of legal technology. Lawyers have always been suspicious of technology. PCs, faxes, the internet, online legal research, and email were met with wariness, skepticism, and sometimes even outrage.
Our profession is far more comfortable with precedent than radical evolution, but as we know, every new technology brings with it the promise of change. So it’s predictable that the now-familiar pattern of erecting roadblocks to adoption will play out whenever a cutting-edge technology intrudes on our change-resistant legal profession.
Examples of the adoption hurdles often put in place by ethics committees and others when lawyers embrace new technologies include outright bans, requirements for signed client consent or published disclaimers, and obligations to notify or obtain permission from judges before using the technology. Eventually, however, as each type of technology becomes more commonplace and familiar, these requirements are eased and ultimately eliminated entirely.
With the recent explosion of newly released generative artificial intelligence (AI) tools like ChatGPT and Google Bard and their rapid adoption by legal professionals, we’re seeing the same pattern of reticence emerge across the legal landscape, from the hallowed halls of law schools to our esteemed courtrooms.
The use of generative AI in litigation has been prohibited by some judges. In one instance, Judge Brantley D. Starr of the US District Court for the Northern District of Texas issued a standing order in April requiring lawyers to certify that generative AI tools were not used to assist with drafting any papers filed with the court. U.S. Court of International Trade Judge Stephen Vaden likewise issued an order in June requiring lawyers appearing in his court to certify that “any submission(s)...that contain…text drafted with the assistance of a generative artificial intelligence program…be accompanied by: (1) A disclosure notice that identifies the program used and the specific portions of text that have been so drafted; (2) A certification that the use of such program has not resulted in the disclosure of any confidential or business proprietary information to any unauthorized party…”
Law schools have also jumped onto the “ban ChatGPT” bandwagon. In April, Berkeley Law School was one of the first to impose restrictions on the use of generative AI by its students. The school released a policy that prohibited students from using it “on exams or to compose any submitted assignments,” and only permitted them to use it “to conduct research or correct grammar.”
More recently, generative AI use was targeted in law school applications. In mid-July, the University of Michigan law school announced that prospective law students were banned from using generative AI tools to assist with the preparation of personal statements.
Fortunately, there are some forward-thinking members of the legal profession who are accepting the inevitability of rapid technological change and are embracing rather than fighting the adoption of generative AI into our profession. In January, Dean Andrew Perlman of Suffolk University Law School suggested that law school students should be taught how to use generative AI as one of the many useful tools in their legal research and writing arsenal.
In other words, he believes that law students (and lawyers) should learn about generative AI and make educated decisions about how to responsibly and ethically use it to streamline legal work and increase efficiencies. If you ask me, that sounds an awful lot like that pesky duty of technology competence, which is a key ethical obligation for lawyers practicing law in the digital age. Funny how that works, isn’t it?
Nicole Black is a Rochester, New York attorney, author, journalist, and the Head of SME and External Education at MyCase legal practice management software, an AffiniPay company. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at [email protected].