
Comments on Technology and the Challenge to Legal Education

  • leelawprof
  • Nov 7, 2022
  • 16 min read

Updated: Nov 8, 2022


Legal education faces challenging questions raised by information and communications technologies (ICT). Some are immediate and others more remote, but the underlying issues are profound and lasting. Law is directly influenced by AI technology, as both the subject of change and the agent for achieving change. As AI is deployed in the legal services industry, it is reshaping the legal profession into a knowledge industry. At the heart of this transformation is the ability of AI to treat the legal texts and recorded words of legal practice as data that can be mined for insight by powerful pattern-detecting systems. Lawyers can now refine their arguments by identifying patterns that have succeeded in the past, patterns often too subtle to detect without the aid of machine learning. For example, AI can detect a judge's preferred language on a given issue, anticipate a juror's reaction to a presentation of evidence, assess the mood and veracity of a witness, or even draft the legal argument for a pre-trial motion. Moreover, AI is a tool for managing a law practice. It plays a growing role in maximizing a firm's income by accurately valuing legal matters and assessing their risk. AI can help analyze firm composition, determine the right wages and incentives to attract and retain top talent, and monitor the efficiency and efficacy of individual lawyers and staff. Data is driving law practice as more lawyers become aware of its power to improve operational efficiency and the quality of service they deliver to their clients.

Law, however, also has agency in this revolution. AI holds vast potential for good and ill, and therefore it must be carefully regulated. Shannon Vallor, an AI ethicist, argues that AI applications typically create consequences that are difficult to know in advance and far-reaching in their scope. This means that they require constant study and evaluation. Their impact on human rights and, critically, on the quality of life they promote must be closely considered, because even well-intended, seemingly harmless technologies can have a significant impact on society. Consider, for example, how Facebook, which was intended as a way for friends to stay in touch with one another, has morphed into a key element in what Shoshana Zuboff calls "surveillance capitalism," the trade in private information, which is altering the processes of democratic discourse and transforming traditional legal concepts of public and private. Nick Couldry and Ulises A. Mejias argue, in their book The Costs of Connection, that this move toward commodifying personal data continues the colonialist history of exploitation, in which all aspects of private life are now viewed as resources for production. "Data is the new oil" is a common expression of this new frontier of commerce.

The application of AI in legal technology should be particularly concerning, given the foundational role that law plays in democracy. In a recent book, Open Democracy, the French scholar Hélène Landemore describes the challenge posed by technology as the recognition that civic republicanism is a system based on "the people's consent to power, rather than the people's exercise of power."[1] The rise of populism, she argues, is a response to this new awareness, which has revealed that representative democracy has privileged ruling elites and entrenched a ruling class. This is true in law as well as political theory. The scope of the change is indicated by recent developments in the philosophy of law, which seek to develop theoretical accounts of law as a generalized human artifact. Simply put, law today is being understood in new ways: as a human artifact that can be modeled computationally, and as information of the sort a digital computer can process. This means legal practice must be re-conceived for an age when algorithms and networks run the world.

One of the greatest challenges for legal education today is to develop a deep understanding of artificial intelligence, which is prompting a return to the values of liberal education. AI has a two-fold impact: one techno-social, the other humanistic. The techno-social dimension is easily ascertained; we are at the beginning of a revolution that is changing the way we live, work, and relate to one another. Klaus Schwab, the founder of the World Economic Forum, argues in his book The Fourth Industrial Revolution[2] that technological innovations have long driven economic and social change, radically transforming society. There is nothing new in this: the development of coal-fueled steam engines freed human beings from the limitations of animal power, and gasoline and electricity extended and refined that freedom. What is new is the urgency of foundational work in applied ethics, which informs law and regulation, to comprehend the issues presented by the new technologies. In Competing in the Age of AI, Harvard Business School professors Marco Iansiti and Karim R. Lakhani explain,

As digital technology increasingly shapes all of what we do and enables a rapidly growing number of tasks and processes, AI is becoming the new operational foundation of business—the core of a company's operating model, defining how the company drives the execution of tasks. AI is not only displacing human activity; it is changing the very concept of the firm.[3]

They claim that "no field of human endeavor will remain independent of artificial intelligence." This opinion is widely shared among AI experts and social theorists who study technological change. As a result, we live in a time of great promise and great peril. The world has the potential to connect billions of people through digital networks, dramatically improve the efficiency of organizations, and manage assets in ways that can help regenerate the natural environment, potentially undoing the damage of previous industrial revolutions.

As Iansiti and Lakhani suggest, the impact of AI is not limited to natural scientists and technologists. The AI revolution is also unprecedented in its challenge to the norms of ethical reasoning itself. In his book, The Fourth Revolution,[4] Oxford philosopher Luciano Floridi argues that the current age should be likened to a Copernican revolution because breakthroughs in the information sciences have displaced the human being from any claim of uniqueness or superiority, and revealed that we are nothing other than information, just like every other creature. Humans differ in degree of intelligence from other animals, but not in kind. And, eventually, we can imagine artificial intelligence will surpass human reasoning in most specific tasks. Even while AI is creating opportunities and raising challenges that must be managed by natural and social scientists with great technical skill and knowledge, it also requires new engagement with what Robert Maynard Hutchins memorably called the "Great Conversation" of the humanities.[5]

At issue from Hutchins's perspective is the fact that, on many levels, AI challenges the way human beings understand themselves and their place in creation. Since the general goal of AI is to mimic human intelligence, it requires a deep and detailed understanding of how the human mind works. It also rests on a profound transformation of our fundamental understanding of the nature of existence itself (what philosophers call the "ontological question"). Information science has displaced the human brain as the sole possessor of reason and, in fact, has shown reason to be a common natural phenomenon. It has also shown that information itself is a fundamental component of existence, even at the quantum level. Together, these two insights challenge the traditional understanding of what is sometimes called "moral anthropology," or the self-understanding of the moral nature of human beings. For this reason, once again, the humanities are on the agenda, since the question "What is the meaning of human life?" is put at issue.

Hutchins's Great Conversation is the educational goal of encouraging students to see themselves as inheritors of the intellectual history of learning and thought on the significant questions of human value and meaning. They are the beneficiaries of ancient Greek thought on virtue from Plato and Aristotle; of Christian thought from figures like Augustine of Hippo, Dante, and the medieval scholastics; of modern thinkers like Descartes, Kant, Hegel, Freud, and Darwin; and of social scientists like Weber, Lévi-Strauss, and Robert Bellah. They are latecomers to this great conversation, but they, too, have something to contribute. The ability to think clearly about the Great Conversation and to take part in it oneself goes to the essence of the concept of liberal arts education. As Hutchins described it:

The aim of liberal education is human excellence, both private and public (for man is a political animal). Its object is the excellence of man as man and man as a citizen. It regards man as an end, not as a means; and it regards the ends of life and not the means to it. For this reason, it is the education of free men. Other types of education or training treat men as means to some other end or are at best concerned with the means of life, with earning a living, and not with its ends.[6]

Hutchins believed that the goal of liberal education is the creation of an educated person who “comprehends the ideas that are relevant to the basic problems and that operate in the basic fields of subject matter.” Liberal learning applied to the field of law, therefore, seeks to identify the fundamental questions that are foundational to American law, in the belief that knowing and understanding these fundamental questions is essential to the liberty of the student and of society. Today, AI is challenging the foundations of the Great Conversation, calling on the traditions of human thought to be refreshed and reconfigured. In its broadest sense, the Innovation Institute is thus at the vanguard of epic change in how humanity understands itself and its place in the moral order of the universe.

A similar view is advanced at Oxford University's new Institute for Ethics in AI, which opened a few months ago. Speaking about its mission, Professor Sir Nigel Shadbolt, Chair of the Institute's Steering Group, observed that the Institute "just doesn't expect the technologists to come up with the ethical answers, or the computer scientist to work out the most creative and valuable ways their technology might be used. This is bridging that two-culture divide that CP Snow often talked about. How do we integrate an arts and humanities-based view of the world with our scientific outlook? That is what makes this a unique opportunity." Oxford chose the philosopher John Tasioulas as the founding director of the Institute. In his opening address, he stated,

AI will continue to have transformative effects on many parts of life, from medicine to law to how we do democracy. I do not want AI ethics to be seen as a narrow specialism, but to become something that anyone seriously concerned with the major challenges confronting humanity has to address. AI ethics is not an optional extra or a luxury, it is absolutely necessary if AI is to advance human flourishing.[7]

Ultimately, this is a task that requires insight from the humanities—particularly from philosophy and theology—since it engages questions about human meaning, the nature of mind, knowledge, and being itself, as core issues in understanding, developing, and wisely implementing the tools that will recreate society and reorient human understanding. As Tasioulas explains:

Science can tell us the consequences of our actions but it does not tell us which goals we should pursue or what sacrifices are justified to achieve them. In so far as we are going to have AI as part of the technological solution to societal challenges, we inevitably have to address the ethical questions too. AI ethics is a way to get clearer about the value judgments involved and to encourage a more rigorous and inclusive debate.[8]

While most law schools are far from Oxford in funding and stature, their goals should be informed by the same concern for understanding how AI is challenging moral understanding and raising new questions about what it means to live well in the world.

The AI revolution is calling on universities to modernize by changing the way they view themselves and their work. Critically, online teaching technology is now widely deployed and, following the pandemic, will be more widely accepted. There have also been calls for more engagement with the community: to leverage technology to provide broader access to under-represented communities, improve student success, work more closely with government departments and agencies, partner with corporations, and participate directly in economic development efforts. And all of this must be done while expanding enrollment and funding sources.

These new demands create a substantial problem for many law schools, which must maintain high bar exam passage rates while responding to a rapidly changing social and legal environment. These goals can conflict, because the demands of innovation can require rethinking traditional assumptions of legal education. This means that the institutional structures and faculty leaders who have kept the law school on course can become obstacles to change. While some law schools have responded by creating new programs and new administrative leadership to support innovation, the results have been mixed. A better and more thoroughgoing understanding of the nature and scope of the changes brought about by ICT is necessary. Achieving such an understanding requires a sustained focus on information science, because it is the science of information that is transforming the law, as it has transformed many other disciplines. To understand the significance of this radical transformation for law, we need to briefly make three points.


Information

The first point is about information itself. In the 1930s and 1940s, the British mathematician Alan Turing and the American scientist Claude Shannon developed mathematical descriptions of computation and information, respectively. By separating the mathematical logic from the human experience of these phenomena, they were able to show that information and computation are in fact common in nature. For example, aspects of physics can usefully be described in terms of Shannon information, and even a simple cellular lifeform can be said to compute. But by separating information from meaning, they also created one of our current dilemmas: we have a flood of information, but very little of it has any meaning for us.
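To make the separation of information from meaning concrete, here is a minimal sketch of Shannon's measure of information, computed over a string of symbols. The function name is my own; the point it illustrates is that the measure depends only on the statistics of the symbols, not on what they mean.

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Average information per symbol, in bits, following Shannon (1948)."""
    counts = Counter(text)
    total = len(text)
    # Sum -p * log2(p) over the observed symbol frequencies.
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A string of one repeated symbol carries no information at all...
assert shannon_entropy("aaaa") == 0.0
# ...while four equally likely symbols carry two bits each, whether or not
# the string "means" anything to a human reader.
assert shannon_entropy("abcd") == 2.0
```

The measure is indifferent to semantics: a meaningful sentence and random noise with the same symbol statistics score identically, which is exactly the dilemma the paragraph above describes.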


Complexity

The second point has to do with complexity. Since the 1990s, the social sciences have been transformed by the view that society is a complex system. Here, "complex" does not mean "complicated," but rather refers to a particular kind of system with mathematically definable properties similar to those of a living, evolving organism. Through the use of Big Data and machine learning, such systems have been found to explain particular aspects of society. Motivated by these developments, social scientists have developed a theoretical approach called New Materialism, an interdisciplinary, theoretical, and politically committed field of inquiry. It is part of a new turn in social thought that has consequences for thinking about how society is controlled, and thus holds implications for thinking about the nature of law. Simply put, we now have data to model society, and what we have learned from doing so is that society cannot be understood by reducing it to the rules and principles of any single discipline. It requires thick descriptions of the dense details of history and lived experience if it is to be meaningful to us.
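The hallmark of a complex system, order emerging from local interactions that no participant intends, can be shown in a few lines. This toy model is my own illustration, not one drawn from the literature cited above: cells on a ring repeatedly adopt the majority value of their immediate neighborhood, and stable same-valued blocks emerge globally.

```python
import random

def step(s):
    """One synchronous update: each cell takes the majority value of
    itself and its two ring neighbors."""
    n = len(s)
    return [1 if s[i - 1] + s[i] + s[(i + 1) % n] >= 2 else 0
            for i in range(n)]

random.seed(7)
state = [random.randint(0, 1) for _ in range(40)]  # a random "society"

# No cell knows or pursues any global pattern, yet after repeated local
# updates, isolated dissenters are absorbed into neighboring blocks.
for _ in range(20):
    state = step(state)
```

The macro pattern (long uniform runs) is not stated in the update rule for any single cell, which is the sense in which a complex system resists reduction to the rules of its parts.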


Demand for Rules

The third point concerns the demand for rules. Gillian Hadfield, a professor at the University of Toronto with expertise in law and economics, suggests that the rapid growth of complexity in society has created a demand for more rules, and that at the current rate of growth the legal systems of our democracies can no longer supply enough rules to satisfy the demand. Hadfield argues that this gap between the supply of and demand for rules has produced many of the tensions within society. The recent experience of social media suggests that the motto "Move fast and break things" may be a battle cry to innovate so quickly that regulation cannot keep up. And many blockchain-based use cases serve the desire for private law, or law-like private ordering. Smart contracts, for example, are not legal relationships per se; they are alternatives to law intended to bypass the centralized authorities of the legal establishment. They are likely to take on more and more of the private regulatory role, as an alternative to law, in the hyper-connected world. This means that our information society is rapidly generating new sources of order from data that has no meaning apart from the lived experience of the human beings who generate it.
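The logic of a smart contract as code-enforced private ordering can be sketched in miniature. This is a toy illustration only; the class and method names are invented for the example and do not correspond to any real blockchain platform's API.

```python
# A toy "smart contract": an escrow whose terms are enforced by code
# rather than interpreted by a court. Illustrative only.
class Escrow:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.released = False

    def deposit(self, payer: str, amount: int) -> None:
        # The contract accepts funds only from the named buyer,
        # and only in the exact agreed amount.
        if payer == self.buyer and amount == self.amount:
            self.funded = True

    def confirm_delivery(self, caller: str) -> str:
        # The release rule executes itself: no judge weighs intent,
        # excuse, or changed circumstances.
        if caller == self.buyer and self.funded and not self.released:
            self.released = True
            return f"{self.amount} paid to {self.seller}"
        return "no action"

deal = Escrow("alice", "bob", 100)
deal.deposit("alice", 100)
print(deal.confirm_delivery("alice"))  # "100 paid to bob"
```

Note what is missing: there is no provision for mistake, fraud, or hardship. The code is the whole of the "law" between the parties, which is precisely why such arrangements function as alternatives to, rather than instances of, legal relationships.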


The Legal Profession

These three points suggest that radical changes will be coming at an increasing pace. Various perspectives on the ethics of technology from the phenomenological tradition pose warnings. For example, Martin Heidegger famously argued that technology tends to thin out moral reasoning by turning everything into an opportunity for material gain. The result is a rush to innovate for the sake of efficiency, in which the use of things, people, and even one's own life has meaning only as a means to material gain. Hannah Arendt extended Heidegger's argument by claiming that the banality of thinned-out moral reasoning leads to totalitarianism. She saw that banality in Adolf Eichmann when he was tried for the war crimes he committed as a principal administrator of the Holocaust. He knew the slogans and arguments of fascism, but could not understand the moral meaning of his actions.

This kind of banal thinking, the thinning out of moral reasoning, happens in legal education too. The great legal realist Karl Llewellyn wrote in his brief collection of essays, The Bramble Bush, that "one cannot make robot lawyers, and it would be dangerous to try to do so." He was concerned, even then, about creating lawyers who did not understand the moral meaning of their actions. Like Heidegger and Arendt, Llewellyn warned that scientific-technical reasoning can create a mindset in which even morally heinous acts become thinkable. The problem posed by the age of Big Data is that we must learn the meaning of information from the experience of the human beings whose lives generate it. But techno-scientific reasoning tends to marginalize and devalue the humanities, where that collective experience finds meaningful expression. The legal profession should be thinking about these implications when, for example, it talks about deploying AI to deliver legal services. It is sad and dangerous that much of what one hears about the ethics of legal technology lacks depth and moral seriousness. We risk becoming banal as Eichmann was banal. To avoid this we need a richer and better-informed discourse, one woven from a thick assemblage of details drawn from many disciplines, especially the humanities. If we do not, legal technology may end up undermining the moral function of law in our Madisonian liberal democracy.


AI Ethics

The ethics of AI is an important research area for the legal profession, since AI will have significant long-term consequences for the entire human species. The field is developing rapidly, with major research centers founded at Oxford, the University of Paris, Harvard, Stanford, MIT, Notre Dame, and many other major universities. There is a growing need for expertise in the field as AI penetrates smaller organizations and governmental units. AI ethics review boards are becoming more commonplace, and that trend is likely to grow. AI is the result of breakthroughs in twentieth-century science and mathematics that culminated, by the beginning of the twenty-first century, in the discovery of previously undetectable forms of organization in vast data sets analyzed by mathematically simulated neural nets. AI poses profound challenges in areas ranging from the philosophy of science to democratic theory, from ontology to virtue ethics, and from archaeology to molecular biology.

Yet, even as AI ethics develops at lightning speed, the insights being revealed suggest a new role for traditional ethical discourse in social thought. Contemporary social science suggests that social systems are evolving dynamic systems characterized by the heterogeneity and contingency of social processes. It suggests that the grand patterns developed in twentieth-century social thought should be disaggregated in favor of more nuanced theories that look for the multiple underlying mechanisms, causes, motivations, movements, and contingencies that come together to create higher-level outcomes. Both neoliberal capitalism and Marxism need to be rethought in terms of the contingencies and heterogeneity that actually exist in societies. Thick description, rather than reductive ideology, is more in line with the complex-systems analysis that has been revealed by AI. Social research, especially in social and political ethics, needs to focus on the micro- and meso-level processes that combine to create the macro world that interests us, without assuming ideologically motivated presuppositions. There are strong similarities to Arendt's concept of "thinking" here. The theory of assemblages, articulated by Manuel DeLanda in his study of Gilles Deleuze, fits this intellectual standpoint very well, since it emphasizes contingency and heterogeneity "all the way down" and develops a sense of the complexity and interconnectedness of the factors and causes associated with this approach to the social world.


The Need for Inter-religious Dialogue

Questions about the responsible development of AI, the impact that AI might have on human rights and the well-being of society, the impact of AI systems on democratic theory and practice, and even the moral nature of human existence are all implicated in the newly detected order revealed by advanced AI systems. Complexity theory in the social sciences suggests that religious belief plays a significant role in shaping the evolution of society, that traditional belief should be given serious consideration, and that ontological and epistemological realism is not "misguided" (see, e.g., Paul Cilliers, Complexity and Postmodernism). At this juncture, however, awareness of this development is scant among religious ethicists and theologians. Nonetheless, there is a societal need to glean the insights of religious traditions to develop a moral understanding of AI as it transforms human society and self-understanding. A few scattered courses and programs exist, but there is a growing need for sustained study of AI ethics, complexity theory, and assemblage theory by scholars of religion.


The Legal Academy

All of this calls on the legal academy to re-evaluate a fundamental, if largely forgotten, part of the description of the work of the lawyer. As stated in the preamble to the Model Rules of Professional Conduct, a lawyer is "a public citizen having special responsibility for the quality of justice." Although this passage is not discussed much, it is critical to the identity of the lawyer as a professional. It calls on lawyers to work for what is good and just for society, and in this way it is part of the justification for allowing the profession to be self-regulating. In these times of rapid change, it is incumbent on the legal profession to once again be what Tocqueville called the stewards of democracy. That means that responsible legal educators should never say that "the most important people in a law student's education are their future clients." I sometimes hear this bromide from legal educators, but it is an irresponsible and ultimately dangerous claim. Some of the political spectacles of the past year involved disbarments of lawyers who forgot that their obligation to protect democracy must take priority over their zeal for representing their clients. The work of the legal educator must be to respond to the growing need for deeper, thicker, and richer interdisciplinary understanding, with the goal of nurturing better-formed citizen lawyers. Legal academics, let alone practicing lawyers, have very little understanding of technology or of the traditions of humanistic discourse that might make moral sense of it, and yet both are needed to be public citizens.

Law schools should strive to raise the level of discourse around legal technology by building on a foundation that includes philosophy and the humanities more generally. To improve the quality of the deliberative process, we must draw on the arguments and insights of experts working in a variety of disciplines on questions of the moral and political meaning of the new technology. This means law schools should draw broadly on many disciplines, but especially moral philosophy, in a radical attempt to bridge the divides between law, science, and the humanities. I do not know of another period in modern history as intellectually ambitious and interdisciplinary as the present. Change can be painful even when it is inevitable, and promising even when it is contingent. We are called to face this rising challenge with insight and forethought. Let us face it also with the full range of human thought and experience, because the wisdom of humanity lies in understanding the human experience of living true to our highest hopes and principles.


[1] Hélène Landemore, Open Democracy (2020), xvi.
[2] Klaus Schwab, The Fourth Industrial Revolution (2017).
[3] Marco Iansiti & Karim R. Lakhani, Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World (2020), 3.
[4] Luciano Floridi, The Fourth Revolution (2014).
[5] Robert Maynard Hutchins, The Great Conversation: The Substance of a Liberal Education, Encyclopædia Britannica (1952).
[6] Id.
[7] Inaugural Director and Academic Team Appointed to New Institute for Ethics in AI, News and Events, Oxford University (September 11, 2020), https://www.ox.ac.uk/news/2020-09-11-inaugural-director-and-academic-team-appointed-new-institute-ethics-ai
[8] Id.

 
 
 


