Artificial Intelligence, Algorithmic Knowledge and the Future of Law Schools

Written by: Christian B. Sundquist, Professor of Law at Albany Law School

When thinking about the future of law schools, the unceasing technologization of legal practice and education, embodied in part by law-based artificial intelligence and the emergence of “lawyer-bots,” deserves critical analysis. A common fear is that looming improvements in artificial intelligence will render the majority of traditional legal jobs obsolete, dramatically calling into question both the existence and the traditional mission of law schools.

Indeed, law firms and lawyers across the world have begun integrating law-based artificial intelligence systems into their practice to improve efficiency and to deliver lower-cost basic legal services to clients. ROSS, an AI system built on IBM’s Watson technology and touted as “the world’s first artificially intelligent attorney,” was developed to answer legal questions (culled from its analysis of legal databases) and to monitor case law and other developments. E-discovery technology has advanced to the point where an AI system such as COIN (a JPMorgan software program) can perform, in mere seconds, document reviews of complex matters that used to require 360,000 human hours. Online legal services companies, such as LegalZoom, offer outsourcing of basic legal drafting and other tasks. A recent study demonstrated that a law-based AI system identified potential legal issues in a series of non-disclosure agreements more quickly and accurately than seasoned contract attorneys. And automated “lawyer-bots,” such as DoNotPay, which helps people sue credit reporting agencies and contest parking tickets, are quickly proliferating.

The practice and education of law, of course, have long been shaped by emerging technologies that change how lawyers approach certain tasks. For example, the development of efficient online legal research tools (such as Westlaw and Lexis) dramatically reduced the number of billable hours a lawyer spends researching basic legal issues (and, perhaps, changed the nature of legal research itself). Similarly, the advancement of e-discovery methods over the last decade (coupled with technological improvements that expanded the controversial outsourcing of low-level legal tasks) has largely eliminated the need for junior lawyers to devote a significant portion of their time to reviewing sensitive documents by hand.

Such emerging technologies can be seen as eliminating the need for lawyers to perform certain tasks and services (such as basic legal research and the drafting of basic legal instruments), and thus as shrinking legal employment opportunities. The same technology, however, can be embraced as improving the efficiency and lowering the cost of legal practice, while (perhaps) expanding access to justice and allowing lawyers to devote more energy to complex legal and analytical issues.

We cannot stop the march of the lawyer-bots, but we can do our best to prepare students for the newly emerging techno-legal landscape. Whereas the traditional legal model was based on the transmission of information and descriptive knowledge (which has now largely been displaced by technology), the new legal model must be based on critical analysis, creative problem-solving and emotive client-based lawyering (which cannot yet be so easily replaced by “narrow” systems of artificial intelligence). The law schools of the future (that is, today) will need to ensure that students are prepared to:

  • engage in high-level critical analysis (such as the ability to develop, understand and articulate policy arguments), engage in complex oral and written advocacy, and appreciate theoretical (jurisprudential) explications of the law;
  • provide creative solutions to complicated legal problems (such as providing individualized advice to clients and engaging in interdisciplinary group problem-solving activities); and
  • provide emotive client-focused representation (such as by further developing professionalism and negotiation skills and enabling students to interact with a diverse range of persons).

The compiling and interpretation of legal information by new technology is nonetheless subject to potential coding bias in the algorithms and assumptions that underlie law-based artificial intelligence systems. Much has been written about such machine-learning bias and how the production of algorithmic knowledge can replicate existing patterns of social inequality (by reinforcing gender and racial stereotypes). For example, the emergence of predictive policing models (such as PredPol, used by law enforcement to identify the likelihood of future criminal activity) and predictive risk assessment software (which judges around the country are beginning to use in criminal sentencing to gauge the likelihood that a person will commit a future crime) has been heavily criticized on privacy and racial justice grounds. As such, law schools also owe their students a duty to help them identify and critically interrogate the core assumptions that foster the development of such algorithmic knowledge, while enabling students to work closely with AI programmers to develop and implement future legal technology.
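
To make that concern concrete, the short sketch below (in Python, using entirely hypothetical data, group labels and rates invented for illustration) shows how a naive “risk score” learned from historically uneven arrest records reproduces that unevenness in its predictions, even when the true underlying behavior is identical across groups. It is not a description of PredPol or any actual risk-assessment tool.

```python
# A minimal, self-contained sketch (hypothetical data; no real system or
# dataset is modeled) of how a "risk score" learned from historically
# uneven arrest records can reproduce that unevenness.

import random

random.seed(0)

def simulate_records(n, group, recording_rate):
    """Generate hypothetical records for one neighborhood group.

    Everyone offends at the same true rate (20%), but offenses are
    recorded as arrests at different rates because of uneven past
    enforcement -- the bias sits in the historical data itself.
    """
    records = []
    for _ in range(n):
        offended = random.random() < 0.20            # identical true rate
        arrested = offended and random.random() < recording_rate
        records.append({"group": group, "arrested": arrested})
    return records

# Historical "training data": one heavily policed and one lightly policed area.
history = (simulate_records(5000, "heavily_policed", recording_rate=0.90)
           + simulate_records(5000, "lightly_policed", recording_rate=0.30))

def learn_risk_scores(records):
    """A naive 'model' that learns the historical arrest frequency per
    group -- the same pattern a more sophisticated classifier would absorb."""
    scores = {}
    for group in {r["group"] for r in records}:
        rows = [r for r in records if r["group"] == group]
        scores[group] = sum(r["arrested"] for r in rows) / len(rows)
    return scores

print(learn_risk_scores(history))
# Prints roughly {'heavily_policed': 0.18, 'lightly_policed': 0.06}:
# the "model" assigns triple the risk to one group even though the true
# offense rate was identical, because it faithfully encodes past policing
# patterns rather than underlying behavior.
```

The point for legal education is that the skew never appears in the code itself; it enters through the assumptions baked into the training data, which is precisely what students must learn to interrogate.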