What Might Artificial Intelligence Mean For Alternative Dispute Resolution?
by James South
Artificial Intelligence (AI), the notion that computerised systems can replicate human thought processes and interactions, continues to gain traction in all areas of life, including the legal profession and, in particular, the field of dispute resolution.
Lex Machina, a data-mining programme created at Stanford University in 2006, has been used to look for patterns that help predict how cases will progress in the US. In November 2017 there were news headlines about ‘Case Cruncher Alpha’, a system developed by a team of Cambridge students, which predicted the outcomes of 775 financial ombudsman cases with 86.6% accuracy; a panel of 100 experienced lawyers assembled to perform the same task achieved 66.3%. We now also have the emergence of ‘Smart Contracts’: agreements recorded across a network of computers (a blockchain) and defined by computer code rather than traditional written clauses. These Smart Contracts are designed to execute automatically when coded conditions are met (the ‘trigger’) and to be free from manipulation through subsequent human amendment.
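To make the notion of an ‘automatic trigger’ concrete, the sketch below is a deliberately simplified illustration in Python (real smart contracts are typically written in blockchain-specific languages such as Solidity); the escrow scenario, party roles, figures and dates are entirely hypothetical.

```python
from datetime import date

# Hypothetical, simplified illustration of a smart-contract-style clause:
# funds held in escrow are released automatically once the agreed delivery
# has been confirmed, with no further human intervention or renegotiation.

class EscrowContract:
    def __init__(self, amount, deadline):
        self.amount = amount              # sum held in escrow
        self.deadline = deadline          # agreed delivery deadline
        self.delivery_confirmed = False

    def confirm_delivery(self, on_date):
        # Record whether delivery happened on or before the coded deadline.
        self.delivery_confirmed = on_date <= self.deadline

    def settle(self):
        # The 'automatic trigger': the outcome follows mechanically from the
        # coded condition, not from later argument between the parties.
        return "release funds to seller" if self.delivery_confirmed else "refund buyer"

contract = EscrowContract(amount=10_000, deadline=date(2018, 6, 30))
contract.confirm_delivery(on_date=date(2018, 6, 15))
print(contract.settle())  # -> release funds to seller
```

The point of the sketch is simply that, once the parties have agreed the code, settlement is determined by the condition written into it rather than by anyone’s later interpretation.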
Underlying many of these predictions is ‘Moore’s Law’, the observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors on a chip (and with it processing power) would double roughly every two years, an observation that has so far largely held. Extrapolating from that growth, it is argued that by around 2045 a point of artificial superintelligence will have been reached (a ‘technological singularity’), creating almost limitless capacity for tasks such as problem solving.
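To give a sense of what compounding doublings imply, the back-of-the-envelope calculation below (in Python) shows why forecasts of enormous capacity by the 2040s follow naturally from the premise; the two-year doubling interval and the 1x baseline are assumptions used only to show the shape of the curve, not a forecast.

```python
# Back-of-the-envelope illustration of repeated doubling under an assumed
# two-year interval, starting from an arbitrary 1x baseline in 1965.
baseline_year, horizon_year, doubling_years = 1965, 2045, 2
doublings = (horizon_year - baseline_year) / doubling_years   # 40 doublings
print(f"Relative capacity by {horizon_year}: {2 ** doublings:,.0f}x")
# -> Relative capacity by 2045: 1,099,511,627,776x (roughly a trillion-fold)
```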
For those of us working in the Alternative Dispute Resolution field, such predictions may seem more unsettling, even frightening, than reassuring or exciting. The question is what AI is likely to do in a setting so focussed on combining subtle concepts such as legal rights and a sense of fairness (adjudication) or human interaction and coaching (mediation). Where do these developments leave us, and what will their impact be?
A Human Factor?
In an adjudicative capacity, could AI have the potential to process claims faster and even to decide cases? Would claimants tolerate their cases being resolved by computers? There is a school of thought that people are less likely to trust a decision made by a computer (even one based on clear logic), and that people may find it easier to complain that a computer, rather than an individual, has got something wrong. It is easy to assert that a computer must simply have malfunctioned; after all, we have all had one freeze on us at some point.
However, the countervailing argument is that, as AI increasingly becomes part of our day-to-day lives – to the point that we allow it to drive us and our families around in self-driving cars – there will come a time when we are completely comfortable letting an algorithm adjudicate our case for us.
In her article “Elementary My Dear Watson!” (Kluwer, August 2017), the Brazilian mediator Andrea Maia writes about Watson, the IBM computer named after the company’s long-serving chief executive Thomas J Watson, to illustrate how fast technology is moving and how such developments may find applications in the legal field, where infallibility is prized.
Maia noted how, “In 2011, a computer gained fame as a celebrity. Its name was Watson of IBM. This was right after taking part in a very popular Q&A show on American TV (Jeopardy), in which Watson, with its ability to understand our language using only information recorded in its memory, and working offline, managed to successfully overcome two of the best Jeopardy players from past shows. Watson used Artificial Intelligence (AI) for this challenge.”
Mediation
In the mediation sphere, the use of AI has been seen as even more remote. Mediation, which in many ways is about bringing the human element back into disputes and litigation, appeared to be an area where AI simply could not be used.
However, the ‘birth’ of Sophia in 2015 may be a harbinger of things to come. Developed by Hanson Robotics, Sophia (with a life-like human head) is designed specifically to interact socially with people and therefore build rapport. Various videos of Sophia interacting with people are freely available online – so you can make up your own mind.
So, could artificial intelligence be present at the mediation table, and could it even be the mediator? Surely AI will not reach the point where a robot might represent a client or even chair a mediation session? What once sounded like pure science fiction now seems, because of developments such as Sophia, remote but not impossible. There is today a myriad of programmes that can recognise and respond to human emotion, and whilst these may not yet match genuine human interaction, they are improving all the time.
The Coming Change
Whilst the idea of machines answering questions does not seem unusual, the idea of them doing so whilst building a relationship with us does seem somewhat strange, especially for those of us who work in ADR. However, we are not the only ones writing on this subject. This article has been heavily influenced by our friend Monique van de Griendt, of Dialogue BV in the Netherlands, who at the UIA Mediation Centre World Congress in July of this year speculated about what many of these developments might mean for the future.
So what does AI mean for ADR? There are several possibilities, all of which could prove true: AI could be a tool for the mediator or adjudicator to embrace, it could be one stage in a broader change to the resolution process, or it might simply be our competition. Are such changes positive or negative? It is hard to know without a crystal ball.
One thing is certain – we need to be thinking about this topic as things develop in order to determine what our own roles should be in the future.