U.S. Supreme Court Holds Expectation of Privacy in Cell Phone Geolocation Data

Here is a recent Daily Record column. My past Daily Record articles can be accessed here.

*****

Smartphones have become central to the lives of most Americans. We count on our phones to keep us connected to the world. Because our phones handle so many pivotal functions for us, we’ve become increasingly reliant on them. They’ve become so much a part of our day-to-day lives that, if you’re anything like me, you feel a bit lost when you realize you’ve misplaced your phone.

Our phones are important to us because of their utility: they instantaneously provide us with relevant, up-to-date information about the world around us. Of course, much of that usefulness derives from the massive amounts of personal data collected by our phones and the apps running on them. That data serves as the basis for a more personalized and functional experience.

Unfortunately, the very same data that makes our phones so valuable to us can also be used against us, sometimes by criminals, and other times by law enforcement. Last month, the United States Supreme Court considered the latter situation in Carpenter v. U.S., No. 16-402, 585 U.S. ____ (2018). At issue was whether governmental access to historical cell phone geolocation data in order to ascertain a user’s movements constitutes a search.

Importantly, at the outset, the Court explained that careful vigilance was required when applying Fourth Amendment jurisprudence to the technological advancements that provide law enforcement with increasingly invasive access to personal information: “We have kept…Founding-era understandings in mind when applying the Fourth Amendment to innovations in surveillance tools. As technology has enhanced the Government’s capacity to encroach upon areas normally guarded from inquisitive eyes, this Court has sought to ‘assure preservation of that degree of privacy against government that existed when the Fourth Amendment was adopted.’”

The Court then turned to an examination of the specific type of information at issue in the case at hand: cell phone geolocation data. The Court noted that it is nearly impossible for users to prevent the collection and storage of their phone’s geolocation data: “Apart from disconnecting the phone from the network, there is no way to avoid leaving behind a trail of location data. As a result, in no meaningful sense does the user voluntarily ‘assume the risk’ of turning over a comprehensive dossier of his physical movements.”

Next, the Court considered whether stored geolocation data was protected by the Fourth Amendment and concluded that it was: “Given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection. Whether the Government employs its own surveillance technology as in Jones or leverages the technology of a wireless carrier, we hold that an individual maintains a legitimate expectation of privacy in the record of his physical movements as captured through (cell phone location information).”

The Court explained that because there is an expectation of privacy in a phone’s geolocation data stored on third party servers, a warrant is required in order for the government to access it: “The Government’s acquisition of the cell-site records was a search within the meaning of the Fourth Amendment…Having found that the acquisition of Carpenter’s CSLI was a search, we also conclude that the Government must generally obtain a warrant supported by probable cause before acquiring such records.”

Of note, the Court clarified that although a warrant is generally required to access stored geolocation data, that requirement does not apply in the face of exigent circumstances.

Finally, the Court wisely recognized its duty to “ensure that the ‘progress of science’ does not erode Fourth Amendment protections.” Given the rapid rate of technological advancement that we’ve seen over the past decade and the fact that the pace of change will only increase in the years to come, this acknowledgement was reassuring.

Technology provides incredible benefits, but privacy issues abound. Protections from unfettered governmental access to the increasingly personal data collected by our phones are needed now more than ever. The Court’s holding in this case strikes the right balance and provides much-needed guidance in the midst of a turbulent and increasingly invasive technological landscape.

Nicole Black is a Rochester, New York attorney, author, journalist, and the Legal Technology Evangelist at MyCase law practice management software. She is the author of the ABA book Cloud Computing for Lawyers, co-author of the ABA book Social Media for Lawyers: the Next Frontier, and co-author of Criminal Law in New York, a Thomson Reuters treatise. She writes legal technology columns for Above the Law and the ABA Journal and speaks regularly at conferences regarding the intersection of law and technology. She can be reached at niki.black@mycase.com.

 


Round Up: Law School Advice, Legal Billing Software, Online Collaboration and More

I often write articles and blog posts for other outlets and am going to post a round up here from time to time (but won't include my weekly Daily Record articles in the round up since I re-publish them to this blog in full). Here are my posts and articles from June 2018:


Pennsylvania court on social media evidence authentication

Here is a recent Daily Record column. My past Daily Record articles can be accessed here.

*****

Pennsylvania court on social media evidence authentication

Social media use is pervasive. People communicate online many times every day. Importantly, those online interactions create digital footprints that can prove invaluable to - and sometimes detrimental to - litigation.

Of course, the somewhat transient and unverifiable nature of online engagement can present problems for lawyers seeking to use social media evidence during litigation. Because it’s so easy for people to interact anonymously or to impersonate others online, lawyers sometimes encounter difficulties when attempting to authenticate social media evidence at trial.

The Superior Court of Pennsylvania recently provided some guidance in this regard in Commonwealth v. Mangel, 2018 PA Super 57 (2018). In this case, the court was tasked with determining what proof was required to authenticate “social media evidence, such as Facebook postings and communications.”

In reaching its decision, the Court reviewed Pennsylvania appellate court cases that addressed the level of proof needed to authenticate other types of electronic evidence, such as text messages and emails. The Court acknowledged that although social media information is similar to other electronic evidence, it also poses unique challenges “because of the great ease with which a social media account may be falsified, or a legitimate account may be accessed by an imposter.” For that reason, the authentication process for social media evidence must necessarily address those issues and provide a level of certainty regarding account ownership and authorship issues.

Of course, the issue then becomes: What level of certainty is required to sufficiently eradicate any doubts regarding those issues? The prosecution asserted that the trial court applied the incorrect standard in this regard when it considered whether there was a “reasonable degree of certainty, reliability, scientific, technological certainty” that the Commonwealth had satisfied the requirements for authentication of the Facebook records.

Notably, the Court disagreed with the prosecution, concluding that the trial court applied the correct standard: “(I)t is clear that the trial court…applied the proper standard in determining whether the Commonwealth had presented sufficient direct or circumstantial evidence that Mangel had authored the Facebook messages in question.”

Next, the court clarified how to apply that standard to social media evidence, and provided guidance for lawyers seeking to authenticate social media postings: “Initially, authentication…(of) social media evidence is to be evaluated on a case-by-case basis to determine whether or not there has been an adequate foundational showing of its relevance and authenticity…Additionally, the proponent of social media evidence must present direct or circumstantial evidence that tends to corroborate the identity of the author of the communication in question, such as testimony from the person who sent or received the communication, or contextual clues in the communication tending to reveal the identity of the sender.”

Finally, the Court applied that standard to the case at hand, upholding the trial court’s determination that the prosecution failed to properly authenticate the social media evidence at issue: “(T)he Commonwealth presented no evidence, direct or circumstantial, tending to substantiate that Mangel created the Facebook account in question, authored the chat messages, or posted the photograph of bloody hands. The mere fact that the Facebook account in question bore Mangel’s name, hometown and high school was insufficient to authenticate the online and mobile device chat messages as having been authored by Mangel. Moreover, there were no contextual clues in the chat messages that identified Mangel as the sender of the messages.”

So, whether you practice in Pennsylvania or elsewhere, the guidance provided by the Court in this case is instructive. If your client’s case hinges on a particular piece of evidence obtained online, the more proof you can offer to establish the identity of the person responsible for creating the online posting, the better. A multi-faceted approach to establishing authorship is advisable, rather than relying on forensic or contextual evidence alone. Certainly forensic evidence alone will be enough in some cases, but not all - and as I always say, better safe than sorry.



Fourth Amendment ramifications of Facebook “searches” by police

Here is a recent Daily Record column. My past Daily Record articles can be accessed here.

*****

Fourth Amendment ramifications of Facebook “searches” by police

I’ve written extensively in the past about the ethical obligations of lawyers who seek to obtain evidence using social media. The specific issues addressed in that context are irrelevant for the purposes of this column, but of note is that all of the ethical opinions on the topic of lawyers mining social media for evidence differentiate between publicly available information and that which is only accessible behind a privacy wall. In other words, the rules are different when lawyers or their agents seek to connect with someone online via a social network in order to view posts that can only be viewed by a person’s connections or “friends.”

But what happens when law enforcement officers seek to do the same thing - obtain social media evidence that can only be accessed behind a privacy wall? One of the more interesting issues to consider is whether the conduct constitutes a search, and if so, does “friending” someone in order to view information behind a privacy wall - in the absence of a warrant - violate the Fourth Amendment?

That very issue was addressed in Everett v. Delaware, No. 257, 2017. The question asked of the court was: “When a person voluntarily accepts a “friend” request on Facebook from an undercover police officer, and then exposes incriminating evidence, does the Fourth Amendment protect against this mistaken trust?”

In this case, a police detective created a fake Facebook profile and eventually sent the defendant a “friend” request, which was accepted. The detective then monitored the defendant’s Facebook account for two years, viewing it one to three times per week. The defendant had a number of violent felony convictions and was thus unable to possess firearms. Shortly after he posted a photo to Facebook that included firearms, among other items, the detective applied for a warrant to search the defendant’s home, which was granted. The subsequent search resulted in evidence that was later used to prosecute the defendant for numerous felonies. The defendant was convicted after trial and filed this appeal challenging the constitutionality of the original search of his home.

In reaching its decision, the Court applied a two-step inquiry. Its first task was to ascertain whether the Facebook monitoring violated the Fourth Amendment or Article I, Section 6 of the Delaware Constitution. If so, its remaining task was to determine whether, after removing the tainted evidence from the warrant affidavit, the information remaining provided a neutral magistrate with probable cause to issue a search warrant.

The Court did not reach the second step of the inquiry since it concluded that the defendant did not have a reasonable expectation of privacy when he shared information with people that he chose to make his Facebook friends. The Court explained that the defendant “assumed the risk” that one of his “friends” might be an undercover officer:

“(T)he Fourth Amendment does not guard against the risk that the person from whom one accepts a ‘friend request’ and to whom one voluntary disclosed such information might turn out to be an undercover officer or a ‘false friend.’ One cannot reasonably believe that such ‘false friends’ will not disclose incriminating statements or information to law enforcement—and acts under the risk that one such person might actually be an undercover government agent. And thus, one does not have a reasonable expectation of privacy in incriminating information shared with them because that is not an expectation that the United States Supreme Court has said that society is prepared to recognize as reasonable.”

In other words: social media users beware. The lesson to be learned is to only share information with your online “friends” that you would readily share with a law enforcement officer. After all, as I always say, better safe than sorry!



Can consent to search be obtained via Google Translate?

Here is a recent Daily Record column. My past Daily Record articles can be accessed here.

*****

Can consent to search be obtained via Google Translate?

Technological advances over the past decade have occurred at an unprecedented rate. As a result, there have been drastic improvements in machine learning and artificial intelligence technologies in recent years, making many science fiction fantasies a newfound reality. A great example of this is Google Translate, a tool that instantly translates speech.

Within the last few years, Google Translate has become widely available as a free online and mobile app that provides the immediate ability to translate words, both spoken and written, from one language to another. Because it’s so easily accessible, it should come as no surprise that it was recently used by law enforcement to interact with a suspect, resulting in a case that addressed an interesting constitutional question. Specifically, earlier this month, in U.S. v. Cruz-Zamora, the United States District Court for the District of Kansas considered whether a non-English-speaking individual can consent via Google Translate to a search of his car by law enforcement.

The case arose from a traffic stop initiated because of the defendant’s suspended registration. At the beginning of the encounter, the officer realized that the defendant spoke very little English. He then moved the defendant to his patrol vehicle and began to communicate with him using Google Translate on the car’s laptop. During that exchange, the defendant allegedly gave the officer permission to search his vehicle; the ensuing search led to the discovery of illegal drugs.

The defendant later alleged that the search was unconstitutional. During the suppression hearing, the officer admitted that a live translator would have been preferable but none were available. He also admitted that the defendant didn’t always understand his questions.

Two professional interpreters also testified at the hearing, and after reviewing the video and audio recordings of the encounter, both opined that it was clear that the defendant was often confused when responding to questions and didn’t always seem to understand what was being asked of him. They also testified that Google Translate failed to take context into consideration and thus “should only be used for literal word-for-word translations.”

In its opinion, the Court initially explained that it was the defendant’s contention that “any evidence obtained as a result of the car search should be suppressed because he did not understand (the officer) and therefore could not knowingly consent to the search.”

Next, the court determined, based primarily on the testimony of the professional interpreters, that “while it might be reasonable for an officer to use Google Translate to gather basic information such as the defendant’s name or where the defendant was travelling (sic), the court does not believe it is reasonable to rely on the service to obtain consent to an otherwise illegal search.”

The Court explained that although the audio and video recordings of the encounter showed that the defendant had a basic understanding of the questions asked of him, the testimony of the interpreters and a review of the transcript indicated that the defendant’s purported consent to search was invalid. The Court concluded that it did “not find the government ha(d) met its burden to show defendant’s consent was ‘unequivocal and specific and freely and intelligently given.’”

Next the court turned to an alternative argument made by the government: that the good faith exception applied, and thus the evidence should not be suppressed. Specifically, the government contended that the officer acted in good faith since he reasonably relied on Google Translate and its translations. In opposition, the defendant asserted that the officer could not “reasonably rely on a mistake of his own making.”

The Court agreed with the defense, and excluded the evidence:

“(T)he good-faith exception does not apply as it is not reasonable for an officer to use and rely on Google Translate to obtain consent to a warrantless search, especially when an officer has other options for more reliable translations. The government has not met its burden to show defendant’s consent was “unequivocal and specific and freely and intelligently given,”…and the court will not interpret defendant’s compliance with Wolting’s instructions to stand by the side of the road during the search as implied consent, considering the totality of the circumstances. The court finds that application of the exclusionary rule is appropriate in this case, and therefore grants defendant’s motion to suppress.”

The lesson to be learned is that while the technology has dramatically improved in recent years, it’s often far from perfect. Tools like Google Translate are improving by leaps and bounds, but it is ill-advised to rely on them indiscriminately when comprehension is crucial and carries legal ramifications. Technology is not a panacea; it merely supplements hard-earned skills and expertise - it doesn’t replace them.

 



Round Up: Legal Beach Reads, Alternative Fees, Cybersecurity, And More

I often write articles and blog posts for other outlets and am going to post a round up here from time to time (but won't include my weekly Daily Record articles in the round up since I re-publish them to this blog in full). Here are my posts and articles from May 2018:

 


Maine Bar on use of social media evidence for litigation

Here is a recent Daily Record column. My past Daily Record articles can be accessed here.

*****

Maine Bar on use of social media evidence for litigation

The phenomenon of social media has infiltrated all aspects of our lives, so it’s no surprise that social media evidence is now a pivotal tool in litigation. Juror use of social media has resulted in mistrials across the country for more than a decade now. And trial attorneys are increasingly mining social media for evidence and researching jurors online.

Not surprisingly, the rising practice of using social media information during litigation caught the attention of ethics committees some years ago, and the first opinion on this issue was handed down in 2009. Since then, I’ve regularly covered these opinions in this column, and recently realized that I’d overlooked one that was issued by the Maine Bar’s Professional Ethics Commission last November: Opinion #217.

In the opinion, the Commission addressed both the ethics of mining social media for evidence and researching jurors online. Another issue covered was whether lawyers may connect with judges or quasi-judicial officers on social media sites.

At the outset, the Commission acknowledged that defining social media is a difficult task, since “(t)he functionality, technology and content available on the platforms that make up “social media” likely will continue to evolve dramatically in the future.” Even so, it attempted to offer a rather broad definition, defining social media networks as sites that “are used primarily for connecting socially with multiple ‘friends’ and for sharing a wide range of personal, professional and editorial information using text, links, photographs and video,” while specifically excluding sites that “lack the type of sharing of non-public information with ‘friends’ selected by the profile holder, which characterizes social media platforms.”

Next, the Commission turned to using social media to obtain evidence for a pending case. The Commission sided with the majority of other jurisdictions in concluding that all publicly viewable social media information is fair game and may be viewed without issue. But for unrepresented parties, data found behind a privacy wall may only be accessed if attorneys or their agents, when making the connection request, “affirmatively disclose the purpose of the contact.” Represented parties were a different story: all private information found behind the privacy wall was off limits, since any attempt to connect in order to access that information would constitute an impermissible communication with a represented party.

The Commission also sided with the majority of jurisdictions on the issue of whether passive notifications (like those sent by LinkedIn when someone views a user’s profile) sent by social networks to jurors constituted an impermissible communication. Like the American Bar Association Committee and the DC Bar Committee (and in contrast to the position taken by the New York State Bar Committee), the Commission concluded that only publicly viewable information could be accessed and that passive notifications to jurors sent by social media sites did not constitute impermissible ex parte communications since “any other approach would be unworkable as a practical matter and would subject attorneys to potential ethics violations based upon the happenstance of user settings or new technologies that generate automated messages outside of the attorney’s reasonable knowledge or control.” However, the Commission cautioned that “where an attorney knows or reasonably should know that accessing any social media of a juror will result in such juror becoming aware of the attorney’s access, the attorney should refrain from accessing that social media, (and) (i)f the attorney learns that any juror…has become aware of (it), the attorney must notify the Court…(which) may find it advisable to provide a cautionary instruction…”

Finally, the Commission weighed in on lawyers connecting with judges online: “Attorneys are permitted to connect with judges and other judicial officers through social media, but they are precluded from having ex parte communications with, or from attempting to impermissibly influence, such judges or judicial officers through social media.” Once again, this was in line with the position taken by most other jurisdictions on this issue.

As more jurisdictions address these issues, commonalities arise in the analysis and conclusions reached. In this case, the Commission wisely acknowledged the rapid pace of technological advancement and incorporated that concept into the context of its determinations. Hopefully committees in jurisdictions that have not yet addressed these issues will follow suit, since guidelines on ethical use of technology that have flexibility built in are more likely to withstand the test of time.



North Carolina Bar considers requiring technology CLE credits

Here is a recent Daily Record column. My past Daily Record articles can be accessed here.

*****

North Carolina Bar considers requiring technology CLE credits

I’ve said it once, and I’ll say it again: technology is advancing at unprecedented rates. The impact of technology on our day-to-day lives is inescapable and the practice of law is not immune. Technology’s reach can be felt across the legal spectrum, from the use of digital evidence in the courtroom and ediscovery, to using artificial intelligence and cloud computing software to streamline law firms and the practice of law.

That’s why, in response to the rapid pace of technological change, the American Bar Association adopted an amendment to Comment 8 to Model Rule 1.1 in 2012. The comment imposes an ethical duty on lawyers to stay abreast of changes in technology. The amended comment reads as follows:

Maintaining Competence

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject. (Emphasis added).

Since then, 31 states have adopted the revised comment to Rule 1.1, including New York, which did so in March 2015.

Then in 2016, Florida became the first state to require that lawyers complete three credits of legal technology CLE per biennial cycle as part of their obligation to stay abreast of changes in technology.

Until recently, no other state had shown signs of following that course. Last month, however, the North Carolina State Bar Council voted to adopt proposed amendments relating to the duty of technology competence. The proposed rules provide a definition of “technology training” and mandate “that one of the 12 hours of approved CLE required annually must be devoted to technology training.” If adopted, the amendments will go into effect in 2019.

The Council defined “technology training” as follows: “(T)he primary objective of the program must be to increase the participant’s professional competence and proficiency as a lawyer. Such programs include, but are not limited to, education on the following: a) an IT tool, process, or methodology designed to perform tasks that are specific or uniquely suited to the practice of law; b) using a generic IT tool process or methodology to increase the efficiency of performing tasks necessary to the practice of law; c) the investigation, collection, and introduction of social media evidence; d) e-discovery; e) electronic filing of legal documents; f) digital forensics for legal investigation or litigation; and g) practice management software.”

The Council also provided clarification regarding the types of CLE programs that will qualify for a technology credit - and which ones will not: “A program on the selection of an information technology (IT) product, device, platform, application, web-based technology, or other technology tool, process, or methodology, or the use of an IT tool, process, or methodology to enhance a lawyer’s proficiency as a lawyer or to improve law office management may be accredited… A program that provides general instruction on an IT tool, process, or methodology but does not include instruction on the practical application of the IT tool, process, or methodology to the practice of law shall not be accredited. The following are illustrative, non-exclusive examples of subject matter that will NOT receive CLE credit: generic education on how to use a tablet computer, laptop computer, or smart phone; training courses on Microsoft Office, Excel, Access, Word, Adobe, etc., programs; and instruction in the use of a particular desktop or mobile operating system.”

Will other states follow suit and mandate technology CLEs for lawyers? Only time will tell. But all signs point to this being the prudent course of action. After all, technology is here to stay and ignoring it is no longer an option. Lawyers need to stay up-to-date and a helpful nudge in the right direction by state bar associations may be the best solution for those attorneys who are unwilling to undertake this task on their own.

 



Will Robots Replace Lawyers?

Here is this week's Daily Record column. My past Daily Record articles can be accessed here.

*****

Over the past year, there’s been a lot of talk about artificial intelligence (AI) and its potential, both negative and positive. Some tout an idyllic world where robots cater to every want and need of humans. Others, like Elon Musk, take a more guarded approach and warn of a world where machines gain sentience and threaten humanity.

Philosophical issues aside, AI remains in its infancy, but already shows great promise. You need look no further than self-driving cars for proof of that.

But what does it mean for the legal industry? How will AI impact the practice of law, and will robot lawyers soon become a reality, eliminating the need for human lawyers? The short answer: AI won’t replace lawyers, but it will automate the more mundane aspects of practicing law, allowing lawyers to focus on more interesting, high-level analytical tasks.

Not convinced? Consider the results of the 2016 Deloitte study, “Developing Legal Talent: Stepping Into the Future Law Firm.” The central thesis of the report is that by 2020, the practice of law will be dramatically different from what it is today, due in large part to technological change, with AI playing a leading role.

For starters, one of the report’s conclusions is that 114,000 jobs in the legal sector will become automated within the next 20 years. And, according to the report, automation has already resulted in the reduction of 31,000 jobs in the industry, mostly in administrative roles. Among lawyers, entry-level attorneys are predicted to be most at risk, while highly skilled lawyers will be safe from the reductions; indeed, the report indicates that demand for highly skilled lawyers will grow by 25,000 by 2020.

Specifically, the report predicts that the following changes will likely occur over the next 10 years as a result of technology:

  • Fewer traditional lawyers in law firms
  • A new mix of skills among the elite lawyers
  • Greater flexibility and mobility within the industry
  • A reformed workforce structure and alternative progression routes
  • A greater willingness to source people from other industries with non-traditional skills and training

And it’s not just transactional law that will be affected. Litigation practices will also feel the touch of AI. Look no further than last week’s news that Ogletree, Deakins, Nash, Smoak & Stewart, a labor and employment-focused Am Law 100 firm, now uses LegalMation, AI software built on IBM’s Watson technology, to draft answers to complaints. With this software, users drag and drop a PDF of the complaint into the platform and designate a practice area. The software then drafts an answer to the complaint in approximately two minutes.

Ready or not, AI is here. The automation of the more mundane aspects of law practice will undoubtedly have a tremendous impact on the practice of law - and much sooner than you might think. So it’s worth learning how AI might affect our profession so that you can position your practice and your firm to take advantage of the changes, rather than be displaced by them.

Mark my words: AI will undoubtedly change the legal profession. You can either resist its impact to your detriment, or take steps to acclimate and use it to your advantage. The choice is yours.



Round Up: Time-tracking Software, Legal Beach Reads, Artificial Intelligence, and More

I often write articles and blog posts for other outlets and am going to post a roundup here from time to time (but won't include my weekly Daily Record articles in the roundup, since I re-publish them to this blog in full). Here are my posts and articles from April 2018: