By akademiotoelektronik, 19/03/2023

Yannick Meneceur: "AI in justice cannot have an answer to everything"

A bit like in a film by Lautner with dialogue by Audiard, the shots rang out sharp and dry at the Council of State. On the eve of the 2021-2022 New Year celebrations, the high administrative court categorically rejected the various appeals filed against a decree of March 27, 2020 creating "DataJust" for a two-year experimental phase.

Let us recall that this algorithm was presented as one of the very first applications of artificial intelligence at the Ministry of Justice, intended to create a reference framework for the compensation of bodily injury by exploiting the decisions rendered by the judicial and administrative courts of appeal between 2017 and 2019. Doubtless alarmed by the presence of all the ingredients of the famous "predictive justice", the various applicants, including the association La Quadrature du Net, raised a variety of grounds to defeat the test phase of this algorithm. Among others, DataJust was accused of substituting itself for the law in setting compensation, of being contrary to the principles of individualization of decisions and of full compensation for damages, as well as to the principles of data minimization and data accuracy. None of these arguments convinced the judges of the Palais-Royal, who noted in passing that the software would, on the contrary, tend "to ensure easier access to case law on compensation for bodily injury in order to guarantee the accessibility and predictability of the law" (12th recital).

Read also: Exclusive: the Ministry of Justice abandons its DataJust algorithm

It must be said that the idea of a reference framework in this area is not really new: from the point of view of the debtors of compensation claims, whether natural persons or private or public paying bodies, the individualization of decisions is seen as a source of uncertainty. Since the 2008 white paper of the French insurance association did not lead the legislator to engrave a national scale in the marble of a law, compensation references, such as the "Dintilhac" nomenclature or the "Mornet" framework, have been built using statistical methods under the aegis of the Court of Cassation, and today serve as guides for the actors of bodily injury compensation in order to ensure better territorial harmonization of the judicial response... without always achieving it satisfactorily.

Mathematicians have shown that the expansion of a database inevitably leads to the appearance of "spurious correlations", that is, links between data that arise from chance rather than from genuine causal relationships.

Machine learning algorithms have therefore revived this ambition, making it possible to create a new generation of reference frameworks inferred from the massive processing of a considerable quantity of court decisions. The Ministry of Justice could not leave the initiative in this field to the private sector alone, whose legal techs already market an offer aimed mainly at lawyers and legal departments. It is in this context that the Directorate of Civil Affairs and the Seal launched, with the support of "entrepreneurs of general interest", the experiment of a system intended for victims, magistrates, lawyers, insurers and compensation funds. DataJust would therefore seem, a priori, to be a good idea, bringing the benefits of the latest technologies to the greatest number. But, as usual, the devil is in the details.


There is indeed much to say about the weaknesses of "jurimetrics" projects, from which DataJust does not escape, in particular with regard to the accuracy of the information produced. Unfortunately, a number of erroneous, intuitive and tenacious representations still structure the high-level debates on the matter, sometimes in defiance of well-documented realities. For example, it is often heard that a large number of decisions is necessary for the reliability of this type of algorithm, and that open data is essential to achieve an objective of accuracy. Mathematicians such as Cristian S. Calude and Giuseppe Longo have nevertheless demonstrated that the enlargement of a database inevitably leads to the appearance of "spurious correlations", that is, links between data that arise from chance rather than from genuine causal relationships. This is how one can quite seriously establish a statistical link between the number of divorces and the consumption of margarine in the state of Maine in the United States. If this correlation makes one smile, let us bear in mind that those lurking in models built with deep learning are not easy to flush out among the thousands, even millions, of parameters that make up real "black boxes".
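To make the point concrete, here is a minimal sketch, not from the article: every data series below is pure noise, yet the strongest correlation found grows mechanically with the number of series scanned, which is exactly the margarine-and-divorces effect.

```python
# A minimal sketch (invented data): every series is pure noise, yet the
# strongest correlation found grows with the number of series examined.
import numpy as np

rng = np.random.default_rng(0)
n_years = 30                                  # e.g. 30 annual observations

target = rng.normal(size=n_years)             # stand-in for "divorces in Maine"
for n_series in (10, 100, 1_000, 10_000):
    candidates = rng.normal(size=(n_series, n_years))  # unrelated random series
    # Pearson correlation of each candidate series with the target
    c = candidates - candidates.mean(axis=1, keepdims=True)
    t = target - target.mean()
    corr = (c @ t) / (np.linalg.norm(c, axis=1) * np.linalg.norm(t))
    print(f"{n_series:>6} random series -> strongest spurious correlation: "
          f"{np.abs(corr).max():.2f}")
```

With only a few dozen observations, scanning enough unrelated series will almost surely surface a correlation strong enough to look meaningful.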

These "jurimetrics" projects also run up against the belief that a broad generalization of machine learning, and of deep learning, is possible in the wake of successes such as image recognition or board games. Yet while it is easy for a machine to perform well in a closed environment with simple and constant rules, such as the game of Go, it is quite another matter in open environments filled with ambiguities, unpredictable events and demands for contextualization. All things that artificial intelligence does not know how to handle today, particularly when faced with the "open texture" of legal interpretation, where two valid lines of reasoning can lead to opposite decisions.

Self-fulfilling prophecies

It is therefore to be feared that these various jurimetrics projects are doomed, in reality, to produce an illusion of knowledge whose only power will be to create self-fulfilling prophecies under a technological varnish, potentially aggravating inequalities between people. This sad observation comes from the United States, where the journalists' association ProPublica revealed how the COMPAS application, placed on judges' desks to decide on pre-trial detention or the quantum of a sentence, attributed higher recidivism risk scores to African-American individuals, without of course having been programmed to do so. ProPublica's study showed that criteria such as place of residence had an indirect influence on the score produced. And this is not a problem specific to COMPAS, but one inherent to any statistical processing, requiring particular attention to the choice of data collected and to the calibration of their processing in order to try to minimize edge effects. But who should decide on this calibration? A private operator? A government operator? The judges themselves? And is an ideal calibration even possible, or even desirable?
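The mechanism ProPublica pointed to can be illustrated in a few lines. The sketch below is hypothetical (the data and feature names are invented, and this is not the COMPAS model): a classifier is trained without the protected attribute, yet a correlated feature such as place of residence reintroduces it into the risk scores.

```python
# A hypothetical sketch (invented data; not COMPAS): the protected attribute is
# excluded from training, yet a correlated "residence" feature carries it into
# the risk scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

group = rng.integers(0, 2, size=n)        # protected attribute, never shown to the model
district = np.where(rng.random(n) < 0.8, group, 1 - group)  # residence, 80% aligned with group
prior_offenses = rng.poisson(1.5, size=n)  # a legitimate predictor

# Recorded labels: heavier policing in district 1 inflates re-arrest rates there.
logit = 0.8 * (prior_offenses - 2) + 1.2 * district
rearrested = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([prior_offenses, district])  # 'group' is deliberately left out
scores = LogisticRegression().fit(X, rearrested).predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted risk = {scores[group == g].mean():.3f}")
# The gap between the two means is the indirect influence of residence at work.
```

Note that no amount of "blinding" the model fixes this: the bias enters through the recorded labels and the correlated feature, which is why the choice and calibration of data matter so much.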

Ultimately, it would be tempting to reassure oneself by putting in place guarantees so that humans keep their hands on the wheel in all circumstances. But that is to reckon without cognitive biases, such as automation bias and anchoring bias. The first describes the human propensity to favor the automatic suggestions of decision-support systems: this is how one ends up driving the wrong way down a street whose direction of traffic has changed, simply by following the advice of a GPS. The second designates the difficulty of departing from an initial piece of information, however fragmentary, especially when assessing a quantified situation: this is how, during sales periods, we come away with the impression of holding a bargain if the gap between the crossed-out price and the displayed price is large. Combining these two biases, it is easy to understand that these algorithms are likely to carry a very particular authority, which imposes heightened requirements on them.

Is justice therefore condemned to banish machine learning from its courts for good? Perhaps not, if we take an interest in less spectacular but also less hazardous applications. Thus, the optimization of new legal search engines, such as the brand-new Judilibre at the Court of Cassation, owes a lot to these technologies. Without trying to predict the outcome of a trial, the use of natural language processing algorithms to analyze the reasoning of decisions would likely make it possible to identify and categorize recurring arguments. Here is a "rear-view mirror" on judicial practices that deserves study.
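As an illustration of this more modest use of NLP, here is a minimal sketch (the decision snippets are invented, and this is not the Judilibre pipeline): recurring arguments are grouped by representing each passage as a TF-IDF vector and clustering with k-means.

```python
# A minimal sketch (invented snippets; not the Judilibre pipeline): recurring
# arguments are grouped with TF-IDF vectors and k-means clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

passages = [
    "the victim's loss of earnings justifies full compensation",
    "loss of earnings must be compensated in full",
    "the claim is time-barred under the applicable limitation period",
    "the action was brought after expiry of the limitation period",
    "pain and suffering warrant an increased award",
]

vectors = TfidfVectorizer().fit_transform(passages)  # bag-of-words weighting
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster, text in sorted(zip(labels, passages)):
    print(cluster, "|", text)                # similar arguments share a cluster id
```

No outcome is predicted here: the system only maps which lines of argument recur, which is precisely the "rear-view mirror" the author describes.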

Rather, therefore, than deploying a veritable strategy of failure by seeking to generalize artificial intelligence at all costs, only to discover the technical inability to do so, a scrupulous examination of the various use cases, neutral with regard to commercial interests and carried out by multidisciplinary teams, would make it possible to identify genuine avenues for action. A fine mission that the brand-new Institute for Studies and Research on Law and Justice (IERDJ) could take up.

*Yannick Meneceur is a magistrate currently on leave of absence and the author of "Artificial Intelligence on Trial" (L'intelligence artificielle en procès, Bruylant, 2020). He notably led the Council of Europe's work on the legal framework for artificial intelligence.
