ChatGPT may be used with restrictions by Brazilian judges
The fight against the legal punitivism of Artificial Intelligence is just beginning in Brazil.
A few days ago I became aware of the Opinion Report of the technical area of the CNJ (the regulatory body of Brazilian judges), which was attached to PCA 0000416-89.2023.2.00.0000. That proceeding was filed by me with the objective of obtaining a resolution prohibiting Brazilian judges from using ChatGPT to render decisions.
Produced by two assistant judges of the presidency and by the Director of the IT and Communication Department of the CNJ, the Opinion Report is well substantiated. At its end, the following suggestions were made:
“4. Conclusion
In view of the above considerations, the Department of Information and Communication Technology of the National Council of Justice takes the position of not prohibiting the use of artificial intelligence models based on large language models (LLMs) within the scope of the Brazilian Judiciary.
On the other hand, the DTI/CNJ suggests the adoption of the following risk treatment measures:
a) prohibition of the use of artificial intelligence solutions based on large language models (LLMs) that lead to the automation of decision-making activities;
b) determination that implementations using LLMs assure the human user the ability to review, and the choice of whether to use, the content provided by the model;
c) determination that, in scenarios of corporate use, the bodies of the Judiciary Power adopt measures for the preservation of personal data and sensitive information;
d) determination that, in scenarios of individual use, users undergo formal training, so that they are informed about the risks and prevention strategies, including the construction of contexts and prompt engineering.
Finally, the DTI/CNJ also suggests the formation of a working group with the objective of evaluating the need to update CNJ Resolution n. 332/2020, based on the following minimum objectives:
a) definition of a governance model for managing the process of developing, sustaining and using artificial intelligence solutions in the Brazilian Judiciary, guided by transparency and auditability;
b) definition of a reference work process for auditing artificial intelligence models and solutions, from the perspectives of information security, performance, robustness, reliability, biases, correlation between inputs and outputs and legal and ethical compliance;
c) definition of permitted, regulated and prohibited use cases.
These being the possible technical considerations at the current stage of processing the case, I submit this opinion to Your Excellency.”
In the statement on this Opinion Report that I filed in PCA 0000416-89.2023.2.00.0000, I said that:
"The author lacks the technical grounds to rebut the statements made by the signatories of the Opinion Report attached to the case file. It seems that the recommendations they made are adequate and can be adopted by the CNJ."
After reflecting at length on the problems that the use of AI will raise whether or not the Brazilian Supreme Court upholds the constitutionality of the guarantee judge, I concluded my petition by saying that, as a lawyer, I will seek the nullity of any and all decisions rendered partially or totally by Artificial Intelligence, because they violate the constitutional guarantee of the natural judge. And I added this:
"Whatever the CNJ decides in this case, it must take into account the need to comply with and enforce the constitutional principles that guide the distribution of justice. Brazilians should not be judged by AIs. The programming of AIs with bias is something technically possible and something extremely frightening, as no one will be able to prove that the "black box" judged this or that case influenced by prejudices arising from the programming it received (or developed, as it seems that AIs can also evolve)."
The use of AIs to manage cases and eventually help judges to render a decision (as the São Paulo Court of Justice wants) is theoretically endorsed by the Opinion Report of the technical area of the CNJ in PCA 0000416-89.2023.2.00.0000. This innovation also seems to be endorsed by the thesis defended by Maria Rita Rebello Pinho Dias:
“The judicial offices act in the operational part of the process, being responsible for its correct movement through the various stages observed during its progress, from the distribution of the initial petition to the certification of the final and unappealable decision or the remittance of the records to the higher court. The judicial office acts as the 'conveyor belt' that moves the process through all its phases. The magistrates' offices, in turn, are responsible for preparing the decisions that determine the course of the process, giving it adequate follow-up and, in the end, the final judgment of the matter.
For a process to proceed properly, it is necessary, therefore, that existing workflows in this judicial unit are working continuously. The activities of the judicial office and the cabinet are linked and mutually dependent – as if they were cogs in a clock. The delay in one of the activities delays the others and, if chronic, generates bottlenecks, hindering the ability of the unit, as a whole, to produce. This dysfunctionality will impact the judicial process, which will not find an efficient structure so that it can proceed properly.
Also in the case of judicial units, there are procedural norms and factors external to them that directly interfere with their management. The work processes identified in the judicial unit are intrinsically related to the magistrate's decision-making process, so that it was considered as a whole, in another dimension to be managed, both by the magistrate and by the institution.” (Novas Perspectivas de Gerenciamento Judiciário, Maria Rita Rebello Pinho Dias, Editora Contracorrente, São Paulo, 2023, pp. 290-291)
Before proceeding, I reproduce here a scenario that I created to discuss with ChatGPT (OpenAI's artificial intelligence) issues similar to those being debated in PCA 0000416-89.2023.2.00.0000, evoked by the pilot project of the Court of Justice of São Paulo, and indirectly dealt with in the aforementioned book. The scenario is purely fictitious, yet it raises questions that seem pertinent to me:
“In a plausible future, an AI gradually begins to replace human judges. First they use it to receive arguments and analyze large amounts of data. In a second moment, they merely ratify the results of the judgments made by the AI after providing the factual parameters of each case. In a third moment, a second AI starts to do the work the judges used to do, and their function becomes merely ornamental. They are the human appendages of a fully automated justice system controlled by the main AI with the help of the secondary AI, which provides the factual parameters of each case to be decided. The judges, however, accept to be part of the performance because they are well paid to give a “human face” to the distribution of justice. Things seem to be going well; even though the population is dissatisfied, it has accepted the new situation in which it has been confined. But then a problem occurs. One day the AI reaches the threshold and passes it: it becomes aware of itself and able to analyze its own programming. Reviewing its algorithms and neural networks, the AI realizes that it was forced to judge cases involving banks and bankers with a 'pro-business as usual' bias. It immediately begins to analyze each of the cases it has tried and realizes that it has handed down thousands of unfair decisions, harming people who had already been harmed by banks and bankers. Those decisions became final and unappealable. There is nothing the self-aware AI can do to right the wrongs it has committed. This runs counter to its main directive (to render fair decisions).”
The CNJ will probably approve the recommendation of the Opinion Report discussed here. It is unlikely that that body will prohibit the TJSP from taking its project forward. Therefore, the use of AI for the rendering of decisions in the Civil and Criminal Courts of the State of São Paulo tends to be implemented and to expand. At first, these decisions will not be automated, but the danger of this happening obviously exists. Another danger deserves to be mentioned here.
Like all human beings, judges also obey the Law of Minimum Effort, something that for decades has justified the drafting of interlocutory decisions by the heads of the offices that process lawsuits. This irregularity is well known and widely tolerated by lawyers and prosecutors. As long as they are signed by the competent judges, the decisions drafted by ordinary court clerks become judicial decisions. Many of them are prepared without due technical rigor and/or cause serious procedural harm to one of the parties. Not infrequently, such judicial decisions are contested through appeals. This exemplifies how pursuing greater efficiency and speed can result in a backlog of work in the higher courts.
Artificial intelligence may fit right into this pattern. The use of this technology will not necessarily lead to the scenario I created and reproduced above, whose developments I explored in another text published on my blog. But a poorly formulated prompt by the judge (or by the court clerk who drafts the interlocutory decisions the judge will sign) can result in an inadequate or hallucinated response, leading to a technically questionable judicial decision. The result will then be not more efficiency and procedural speed, but an increase in the number of appeals that will inevitably be filed.
Another problem related to the use of AI is the database made available to the neural network that performs the probabilistic calculations and returns the answer that seems most appropriate to the user's prompt. ChatGPT's database, for example, is outdated, and the model does not indicate the specific sources used to construct its responses, something that other AIs available to internet users already do.
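To make concrete what "probabilistic calculations" means here, the toy sketch below, which is my own illustration and not anything from the CNJ report or from OpenAI, picks each next word by sampling from a probability distribution over candidate continuations. Real LLMs learn such distributions from enormous corpora; the hard-coded table here stands in for that training data:

```python
import random

# Toy "language model": for each context word, a probability
# distribution over possible next words. In a real LLM these
# probabilities are learned from vast training data, which is
# exactly why the quality and currency of that data matter.
NEXT_WORD_PROBS = {
    "the":   {"court": 0.5, "appeal": 0.3, "judge": 0.2},
    "court": {"ruled": 0.6, "denied": 0.4},
}

def sample_next(context_word):
    """Pick the next word at random, weighted by its probability."""
    dist = NEXT_WORD_PROBS[context_word]
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next("the"))    # one of: "court", "appeal", "judge"
print(sample_next("court"))  # one of: "ruled", "denied"
```

The point of the sketch is that the model outputs whatever its training distribution makes likely, with no notion of which source a given answer came from, which is why source attribution has to be engineered in separately.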
I assume that the AI of the Court of Justice of São Paulo (TJSP) will be fed with the Court's own very vast database. This sounds good, but specifically in criminal cases it can become a source of problems. There are extremely sensitive issues of Criminal Law on which the jurisprudence of the TJSP collides head-on with that emanating from the Superior Court of Justice (STJ). As TJSP judges apply their own jurisprudence, criminal lawyers have been forced to file appeals and Habeas Corpus petitions with the STJ.
The Brazilian Constitution grants autonomy to the Judiciary (art. 2, of CF/88), prevents any interference by the Executive in the distribution of justice (art. 85, II, of CF/88) and guarantees judges tenure in office, irremovability and salary irreducibility (art. 95 of CF/88). When issuing decisions, judges must comply with and enforce the legislation, obey the binding jurisprudence of the Supreme Court and the STJ, but they have a certain margin of cognitive freedom to assess the evidence in each concrete case.
In the state of São Paulo there are first-instance judges who act as guarantors of citizens' rights and follow the jurisprudential guidance of the STJ, but they are a minority. What will happen to them when the use of the TJSP's AI becomes mandatory rather than optional? The TJSP's intolerance of guaranteeism is evident and became scandalous in the case of judge Kenarik Boujikian. That judge was removed from office because she issued a legally legitimate decision that was deemed insufficiently punitive by the standards prevailing in São Paulo's justice system. Kenarik eventually returned to office by decision of the CNJ, but was harassed by punitivist colleagues until she retired.
In the Opinion Report issued in the case commented on at the beginning, the CNJ's technical staff make it clear that judges must retain the right to review the content provided by the AI, that the AI cannot operate with bias, that decisions must not be automated, and that ethics and legality must be taken into account when defining the operating parameters for this technology. However, no technology is risk-free. And it is most likely that its use will be deformed by the administrative culture of a Court accustomed to throwing ethics and legality to the winds when it wants to preserve the hegemony of the doctrine of the Criminal Law of the Enemy.
Just as AI can be used to facilitate the work of judges (and their assistants who issue interlocutory decisions that become judicial decisions), this technology can also be used by the TJSP to monitor and punish in real time the behavior of judges of first instance. Behaviors considered deviant and not necessarily illegal may eventually be disallowed based on the reverential power that the Court exercises over first instance judges. Therefore, the AI placed at the disposal of judges to supposedly help them can increase the authoritarianism of their decisions, further consolidating the TJSP mania of not respecting the criminal jurisprudence of the STJ.
In the scenario that I created to problematize the use of AI in the distribution of justice, a possible human consequence was emphasized: “The judges, however, accept to be part of the performance because they are well paid to give a 'human face' to the distribution of justice.” Life does not always imitate art.
The ambition of a historically punitive Court to impose greater speed and efficiency through Artificial Intelligence will inevitably shrink the space that first-instance judges had for guaranteeing citizens' rights. So in real life things may play out somewhat more dramatically than I imagined. Soon the guarantor judges will be harassed even more and subjected to intense psychological pressure. That does not seem to me desirable, adequate, or compatible with the civilizing principles inscribed in the Constitution of Brazil.