By David Bufton, Travers Smith.
The administration of justice is both a serious and consequential function of the State, as it can involve the exercise of coercive power over individuals and interference with their fundamental rights. Society's trust in the judicial system is therefore essential, and any automation of justice will be a very complex undertaking. This article explores, at a high level, how existing AI models could automate alternative dispute resolution (ADR) for commercial disputes and considers some practical and ethical barriers to doing so.
First, what do we mean by ADR? In short, it is any dispute resolution that sits outside of the court system. To give some examples, this can be basic negotiation between the parties, which can defuse a dispute or more formally ‘settle’ one. There is mediation, where a neutral third party shuttles between the parties to a dispute in an attempt to settle it or at least narrow the issues.
Next there is expert determination, whereby the parties decide to have a neutral third party with relevant expertise in the field (e.g., valuations or other technical areas) decide on a narrow dispute. Unlike negotiation and mediation, expert determination results in a legally binding decision which is imposed on the parties. Arbitration, which is most akin to court litigation (except that the parties to the dispute decide the rules of the game and who the arbitrators will be) also results in a legally binding decision.
It has been argued that AI models will soon be capable of predicting the decisions of a neutral third party (e.g., a mediator*). Such a prediction system, if accurate enough, opens the door to automated ADR (‘AADR’). For negotiation or mediation, this could involve an AI model interrogating the logic of arguments or evidence and proposing middle ground or settlement. For expert determination and arbitration, this would require more extensive training of the AI model on particular sectors or areas of law so that predictions of the neutral third party’s decisions could be made.
There are several reasons why AADR could be a good thing, aside from the benefits arising from an increased uptake of ADR (for example, relieving the burden on the court system and saving parties time and costs by avoiding lengthy court litigation).
– Firstly, AADR is likely to be significantly quicker and potentially cheaper than ADR, especially when indirect costs such as management time and legal costs are included. This should enhance access to justice, allowing smaller businesses and individuals to obtain a resolution in circumstances where they might not otherwise have had the time or funds to pursue litigation or ADR.
– Secondly, an AI model could result in more unbiased and fairer outcomes. Unconscious bias, the quality of advocacy, and even the mood of the decision-maker (for example, whether a decision is made before or after lunch) may improperly impact upon decision making. Provided the AI model has been trained on high quality data and efforts are made to limit structural bias, it may be that a dispassionate AI can resolve disputes in a fairer way. The parties may also be more trusting of a dispassionate decision.
– Finally, the impersonal nature of AADR, without the need for face-to-face advocacy and conflict, may reduce tensions. Removing the emotional element from a dispute can propel parties to a settlement, preserving ongoing business relationships.
Unfortunately, it is not all upside for AADR. There are a number of well-rehearsed arguments against automation in general (for example, job losses and loss of skills) and concerns about the careless deployment of AI (for example, intellectual property infringement and privacy issues), which are very important but outside the scope of this article. Instead, here are a few hurdles specific to the introduction of AADR.
– There are a few human qualities which AI at present has difficulties with and which are important to dispute resolution. One is the ability to reason, which means that an AI model could struggle to determine whether an action is ‘reasonable’ (which is important in the world of commercial law) or to discern the credibility of factual evidence. Another capability which appears to be lacking is emotional intelligence, which means an AI model may miss the nuances and complexity of human interaction.
– Alongside this are some large ethical and moral questions, upon which there is little consensus. Even if an AI model is capable of employing morality when generating its output, should the AI model be morally neutral and blindly apply the law, or should it exhibit morality and, if so, whose morals? Is it possible for an AI model to be morally neutral? These are enormous ethical questions which ought to be carefully considered before an AI model is deployed.
– Next, parties must trust that a dispute has been dealt with in a just, impartial and fair way with due process. Therefore, the well-rehearsed criticism that generative AI is a black box will be particularly problematic. Without the ability to interrogate how a decision has been reached, the unsuccessful party will have limited ways of dispelling doubts about, or discovering, any bias or error (a particular problem in a world where AI models famously hallucinate and exhibit bias). This criticism is heightened where the output is legally binding (for example, with expert determination).
In light of all that, should we be automating justice at all? In the justice space, ADR is a prime candidate for automation because it is usually voluntary, so the ethical concerns which accompany the coercive use of automated decision-making over an individual’s rights are not generally applicable. However, it seems that existing AI models struggle to exhibit various human qualities which are generally required of ADR facilitators, and the opaque nature of existing technology means that due process and trust in the administration of justice may be undermined if it is deployed without care.
That said, there are circumstances where disputes do not require much (if any) moral decision-making or emotional intelligence, and the stakes may be such that parties are content to accept the output of an AI model which can be shown to accurately predict the decision of a neutral third party, notwithstanding the inability to interrogate that decision. Examples include repetitive, simple, low-value, business-to-business commercial disputes which require a plain application of the law. With appropriate safeguards, AADR used in this space may increase access to justice by resolving disputes which otherwise might not be pursued due to costs or other constraints.
As the technology surrounding AI advances, the temptation to deploy automation to the administration of justice will grow. There are important practical and ethical questions to be considered before that can be done safely.
—
Author: David Bufton, Senior Knowledge Lawyer at Travers Smith LLP
[The information in this thought leadership article is intended to be of a general nature and is not legal advice.]
Note: * Cortés, “Artificial Intelligence in dispute resolution” 2024 C.T.L.R. 30(5), 119-127, 123.