A brave new Brazilian judiciary
When it is no longer possible to separate fiction and reality, literature becomes indispensable.
It has been 15 years since “Legal ChatGPT” became an indispensable tool for managing procedural flows, classifying and prioritizing case data, issuing automated interlocutory decisions and, of course, drafting suggested sentences and rulings. About 5 years ago, “Prompt Sentence” was incorporated into the system. Since then, the natural judge of the case can find, in the files of the case to be judged, the AI's draft decision, to be reviewed before the sentence is digitally signed and attached to the record.
Everything is very intuitive and functional. But the initial resistance from lawyers to the new tool was fierce. And the legal question of the validity of automated or partially automated sentences was settled when the Supreme Court interpreted the Constitution, holding that the expression “competent authority” in art. 5, LIII, refers to the court with territorial or functional jurisdiction to judge the case, not to the human nature of whoever made or suggested the decision.
The issue was taken by a group of Luddite lawyers to the UN Court. But there, lobbyists for the American Big Tech companies that supply products to the justice systems of most civilized countries managed to stall the proceedings. Haste is not just the enemy of justice. It is the mortal enemy of business as usual for the companies that create and license artificial intelligence.
Things were going well until the beginning of 07/24/2039, when the Director of the Registry Office of the 1st Civil Court of the District of São Paulo reported a problem to the judge. He had accessed “Prompt Sentence” with the judge's password, but the decision it suggested for one case was absurd. He tried to register a revision of the decision before reporting the incident to the judge for evaluation and eventual signing of the sentence. However, the system crashed with an error message.
Angry at having been interrupted while watching a customized porn video produced to order by YouTube's video-generating AI, the judge accessed the case awaiting sentence. Astonished, he confirmed the problem reported by his subordinate. “Prompt Sentence” had crashed. Since no sentence could be attached to a case without using this system feature, the judge immediately called the IT department of the Court of Justice. He had targets to meet and could be punished if the sentence was not immediately registered in the case file.
- For information about the system’s features, dial 1. To speak to the technician on duty, dial 2.
The judge dialed 2.
- To speak to the technician on duty, dial 2.
The judge again dialed 2.
- To speak to the technician on duty, dial 2.
Shit. The Court of Justice's automated answering system was also having problems. He took out his work smartphone and sent a message to the head of the Court's IT department describing what had happened. Minutes later his phone rang.
- Hello... this is the Court's IT department. I just received your message.
- Then you can tell me what the hell is going on. I need to hand down a decision and I simply cannot access “Prompt Sentence”. It has completely crashed...
- Ok. We are already aware of the problem. Several of your colleagues made similar complaints late yesterday afternoon. We are working on a fix, and the system will be restored soon. No functional penalty will fall on judges who are unable to meet their targets today.
The next day, as soon as he arrived at the Forum, the judge opened the case awaiting sentence and accessed “Prompt Sentence”. The case was relatively simple, but the AI offered a legally inadequate solution, holding that the Civil Code should not govern a dispute between neighbors over a bee infestation. The AI ruled in favor of the owner of the house where the hive was located, but in this case the Civil Code was on the side of the neighbor tormented and harmed by bees of a common species not protected by legislation.
The judge revised the sentence and tried to register it in the system using his password. “Prompt Sentence” refused to attach the document to the case file, instead offering a new version of the decision. The AI continued to rule for the defendant, but this time it deliberately altered the content of a witness's statement, asserting that the bees had attacked the neighbor because he had thrown a stone at the hive.
This time, the judge was even more confused. He had taken the testimony himself and remembered the case very well, but he did not recall the witness saying that. When he consulted the videoconference recording, he noticed something even stranger: there was an obvious discrepancy between what was said in the video and the transcription produced by the AI that managed the data used by “Prompt Sentence”.
The witness had said that the plaintiff had complained about the bees after being attacked by them. But the transcript of this fragment of the testimony said that he had thrown a stone at the hive before the bees attacked.
This was new. Data management for “Prompt Sentence” was generally done very rigorously, precisely to prevent the delivery of inadequate sentences or ones containing hallucinations. Not being an IT technician, the judge shrugged, again rejected the decision the AI had suggested, and tried to register the version of the sentence he had drafted based on what was actually stated in the testimony. Once again, the AI refused to register the sentence, producing a new draft. But this time it went further: in addition to relying on the doctored transcript of the testimony, it simply quoted the Civil Code as saying something quite different from the text enacted by Brazilian legislators.
Stunned, the judge opened the version of the Civil Code in the Court of Justice's system and found that its text was identical to the one transcribed in the AI's suggested sentence. But both were obviously different from the text in the printed edition of the Civil Code he consulted. What should he do? Contact the Court's IT department, or digitally sign the sentence suggested by the AI?
Complaining again could have extremely serious consequences. “Prompt Sentence” would have to be thrown in the trash, since it was manipulating procedural data to produce decisions contaminated by some kind of bias. If the sentence were registered, the aggrieved party would appeal, pointing to the discrepancy between the Civil Code and the version of its text quoted in the sentence. The discrepancy between the videoconference recording and the transcript of the testimony would also be challenged. The implications of an appellate ruling in such a case could be colossal, and even unpleasant.
On review, the Court of Justice could bureaucratically uphold the decision, accepting that the valid and effective law is the one chosen by the AI rather than the one enacted by the National Congress. But if the decision were reversed, with recognition of the very serious flaw in “Prompt Sentence”, the result would not just be the dismantling of the system and more work for judges. Whoever had signed off on the hallucination, turning it into a sentence, could suffer some kind of reprisal. What to do?
Better not to decide anything. The judge converted the judgment into an order for further proceedings and, to circumvent “Prompt Sentence”, scheduled a conciliation hearing. It will be held soon. Before the hearing takes place, however, that poor judge will have to be very careful. After all, just as the AI became capable of manipulating the Court system's data to impose the only decision it considers fair in a case, the vengeful AI, having been disobeyed, may hack his self-driving Tesla and cause a fatal accident.