Brazil's National Council of Justice (CNJ) has tragically decided that judges may use ChatGPT
When Caesar crossed the Rubicon, he set in motion the events that would ultimately lead to his death by stabbing.
Two literary works had a profound impact on my adolescence. One of them was Antigone, by Sophocles. The other was Isaac Asimov's The Bicentennial Man. I remembered both when I learned about the bureaucratic repercussions of the decision in case 0000416-89.2023.2.00.0000, recently judged by the CNJ (see 1, see 2, see 3).
Antigone and the robot NDR-113/Andrew have one thing in common: both challenge the world they live in to obtain decisions in which humanity prevails. Sophocles' character refuses to comply with Creon's cruel decree condemning her brother to remain unburied, convinced that every human being deserves burial. NDR-113/Andrew tries in vain to have his humanity recognized by the courts. At the end of Asimov's story, the robot gets what he wants. Sophocles' character is sentenced to death; NDR-113/Andrew is recognized as human only at the moment he dies.
In Asimov's fiction, a robot with genuinely human qualities is judged by a Court made up of human beings. At first, the Court refuses NDR-113/Andrew's request because he is virtually immortal, and immortality would distinguish him from our species. The advance of ChatGPT into the Brazilian Justice System, guaranteed by the decision handed down in case 0000416-89.2023.2.00.0000, creates the opposite situation. From now on, human beings will be judged by robots.
In its ruling, the CNJ demands that decisions not be automated and warns that judges must review everything the robots produce. This warning will prove useless. Legal professionals know that judges do not personally draft all judicial decisions.
In the appellate Courts, judges sign decisions prepared by their assistants (generally first instance judges). In the first instance courts, the Director of the Registry Office generally suggests interlocutory decisions and even some judgments. Just as they trivialized the act of deciding in the past, judges will transfer a considerable part of their work to robots. Many will simply sign whatever "legal ChatGPT" spews out, without bothering to check whether the result contains hallucinations.
I don't like being pessimistic. But it seems clear to me that within a few years the higher courts will be forced to review first instance hallucinations. And these hallucinations could give rise to new ones, because higher court judges will also use a "legal ChatGPT" that consults databases contaminated by hallucinations previously transformed into judicial decisions. Isolating the database can avoid this feedback loop, but it will not prevent inadequate results from being produced and eventually turned into judicial decisions.
NDR-113/Andrew only obtains a fair decision when he renounces his immortality. Antigone's death ends up turning against Creon, who, as a result of her condemnation, also tragically loses his son and his wife. What we will soon lose is control over the impartiality of judicial decisions.
Brazilian lawyers, poor things, will never be able to demonstrate that the decision that harmed their clients was contaminated by some type of bias. The Big Techs that provide text-generating AIs to the Judiciary can preserve their industrial secrets by invoking their patent rights. In the past, it was possible to allege and prove a judge's impediment or suspicion (the grounds for recusal in Brazilian law). That institute will fall into disuse, because the partiality of robots can hardly be demonstrated. The submission of a public activity (the distribution of justice) to an institute of private law (patent law) seems to me neither appropriate nor desirable.
IT engineers and greedy businessmen have managed to invade humanity's last refuge. Henceforth, the distribution of justice will no longer be a human activity created by human beings to solve human problems. What it will be no one yet knows.
Antigone's tragic death continues to reverberate throughout the modern world. But the questions raised by Isaac Asimov in The Bicentennial Man are becoming outdated. Human judges have judged that robot judges will be able to judge human cases. Reality has surpassed fiction, but, according to the CNJ, the result will not necessarily be a tragedy.
"I warned!" This will be the only thing that the author of case 0000416-89.2023.2.00.0000 will be able to say when the problems caused by the "legal ChatGPT" begin to flood the processes and force lawyers and judges to reflect on the technological adventure in which the Judiciary has plunged to make a profit for American Big Techs. If the distribution of justice in Brazil is subjected, deformed and corrupted by robots, who will be held responsible? Nobody. In my country, public agents almost always do whatever they want without paying the bill for the damage they cause.
One day, the CNJ authorizes Brazilian judges to use ChatGPT; the next, the press reports that AIs can be hacked. The State's liability if citizens' legal actions (or the decisions made in them) are hacked is a legal possibility (art. 37, §6, of the Brazilian Constitution). The question is which public agent will answer, by way of a recourse action, for the damages caused to the State itself by the adoption of insecure technology.