Concerned about the performance of ChatGPT in the tests I submitted it to, as well as the news about ChatGPT published in Brazil, I decided to ask the CNJ (the public agency that supervises and controls the performance of judges) to prohibit Brazilian judges from using this resource in the concrete cases in which they act. Below is the content of the petition I filed today:
“I- In recent days, the world community has been surprised by an avalanche of news about a new technological resource made available to internet users. I am referring here to ChatGPT, an artificial intelligence that anyone can use for recreational and professional purposes.
II- The tests carried out with ChatGPT in the legal field have so far been inconclusive about its potential. It managed to pass a US exam equivalent to the Brazilian Bar exam, but did poorly on a test involving US Supreme Court cases.
III- According to the attached documents, the author of this complaint submitted ChatGPT to four tests. In one, ChatGPT suggested, for a hypothetical conflict, a diplomatic solution that would lead to the very nuclear war it should have tried to avoid. In another, besides forgetting that Brazil has a specific law typifying the crime of genocide, it limited itself to raising plausible doubts about the Brazilian president's responsibility for the Yanomami genocide, refusing to interpret the factual evidence to the contrary. In a third test, related to economics, ChatGPT did a little better. It rejected the adoption of austerity, but provided incomplete answers and proved incapable of making important distinctions.
IV- ChatGPT is extremely seductive, but its use by judges can be extremely harmful. ChatGPT is not capable of properly weighing the factual parameters provided to it. The answers it provides are often fallacious and contaminated by routines programmed into it during its creation. It seems to provide adequate answers, but its "prudence" is feigned and flawed. This was especially clear in two of the tests applied by the author of this complaint (the diplomacy test and the one referring to the Yanomami genocide).
V- Every Brazilian citizen has the right to have their case heard by their natural judge. Nobody can prevent a magistrate from searching legal texts, doctrine and jurisprudence on the internet. However, he must not transfer his power/duty to judge the specific case submitted to him: he must personally choose the appropriate grounds for the sentence he pronounces in order to resolve the conflict and pacify the dispute between the parties. Therefore, in theory, a judge cannot and should not use ChatGPT as a crutch, feeding it the parameters of the concrete case in order to obtain the most appropriate legal solution for a case he has to judge. However, in the absence of a rule prohibiting the use of ChatGPT by members of the Brazilian Judiciary, it is evident that some of them will be tempted to use the new technological resource available to all internet users.
VI- The granting of an injunction is justified in the present case to, as a precaution, immediately prohibit the use of this technological resource by members of the Judiciary. Brazilian judges should be able to use ChatGPT only for recreational purposes, but never to issue and/or support decisions in the specific cases in which they act.
In view of the above, the complainant requests the granting of an injunction to prohibit Brazilian judges from resorting to ChatGPT to issue and/or base their decisions in the specific cases in which they act. On the merits, after due processing of the present complaint, he requests that the CNJ define rules that allow judges to use ChatGPT only for recreational purposes, preserving the validity and effectiveness of the constitutional norm that guarantees Brazilian citizens the right to have their cases judged only by the competent authorities (which excludes the transfer of this power/duty conferred on judges to ChatGPT).”
A similar complaint was filed by me with the Ombudsman of the CNMP (the public body that supervises and controls the performance of Brazilian prosecutors). Prosecutors should also be prohibited from using ChatGPT to assess the cases in which they act and/or to substantiate the petitions that they bring to the attention of the Judiciary.
After taking the aforementioned steps, I submitted the issue to ChatGPT itself. Here is the result of the conversation.
“Fábio: I have some news for you about using ChatGPT in Brazil.
Open AI: Please share the news, I'd love to hear it.
Fábio: I'm afraid the news is not good.
Open AI: Please let me know, I'm here to listen.
Fábio: ChatGPT cannot be used by prosecutors and judges. They should not resort to it to solve the concrete cases in which they work.
Open AI: I understand. It's important to note that OpenAI's policy prohibits the use of its technology for illegal or harmful purposes, and encourages responsible and ethical use. ChatGPT is designed to assist users in generating text based on the input provided, and it is up to users to determine the appropriate and legal use of the information generated by the model.
Fábio: I think you misunderstood. They will be prohibited from using ChatGPT except for recreational purposes.
Open AI: Thank you for clarifying. It's important to note that the use of AI technology, including ChatGPT, is subject to local laws and regulations. It's always important for users to be aware of and comply with relevant laws and ethical guidelines in their use of technology.
Fábio: Can ChatGPT identify who used it illegally so that the person can be punished?
Open AI: No, ChatGPT does not have the ability to identify who is using the model or for what purpose. It operates solely based on the text input it receives, and does not have the ability to track or monitor users. Any misuse of the technology would be the responsibility of the user, and it would be up to the appropriate authorities to enforce laws and regulations related to such misuse.
Fábio: How can the authorities know whether or not someone banned from using ChatGPT violated the law?
Open AI: It would be up to the authorities to enforce the ban on the use of ChatGPT and to monitor compliance with such a ban. This could potentially be done through various means such as monitoring of server logs or other digital data to determine the usage patterns and sources of the information generated by the model. However, it's important to note that enforcement of laws and regulations related to the use of technology is complex and may require specialized knowledge and expertise.
Fábio: Can't ChatGPT itself identify the problem and say, "You are violating a prohibition that applies to the position you hold and this conversation will be terminated"?
Open AI: No, ChatGPT does not have the capability to determine the legal status or identity of users or the specific laws or regulations that apply to them. It is designed solely to respond to text input and provide output based on the information it has been trained on. The responsibility for ensuring compliance with laws and regulations related to the use of technology lies with the users, not the technology itself.
Fábio: ChatGPT can learn to solve problems while illegally serving people banned from using it, but it cannot learn to identify illegalities and help punish them. Isn't that a problem?
Open AI: As an AI model, ChatGPT is designed to assist users in generating text based on the input it receives, and it does not have the capability to determine the legality or illegality of specific actions or uses. While it may be possible to train a model to identify certain patterns of text that are associated with illegal or unethical behavior, the model would not have the ability to independently determine the legality or illegality of a specific situation.
It's important to note that the use of AI technology is subject to laws and regulations, and it is the responsibility of users to ensure that they are using the technology in a responsible and ethical manner. If a user engages in illegal or unethical behavior with the use of AI technology, it would be up to the appropriate authorities to enforce laws and regulations related to such behavior.
Fábio: I understand. But I cannot consider this fair.
Open AI: I understand your perspective. The development and use of AI technology, including chatbots like ChatGPT, raises complex ethical and legal questions. It is important for both the technology providers and users to be aware of the potential risks and responsibilities associated with the use of AI, and to work towards ensuring that the technology is used in a way that is responsible, ethical, and fair. The debate around the proper use of AI technology and the development of guidelines and regulations to ensure its responsible use is ongoing, and it will continue to evolve as the technology develops.
Fábio: Is there a possibility for ChatGPT to evolve to include the routine it cannot perform now?
Open AI: Yes, there is always the possibility for technology, including AI models like ChatGPT, to evolve and improve over time. The field of AI and machine learning is rapidly advancing, and new developments and capabilities are being discovered and created all the time. OpenAI is at the forefront of this field, and it is committed to continuing to research and develop new AI technologies that can improve the world in meaningful ways.
It's possible that future advancements in AI technology could lead to the development of AI models that have the ability to monitor and regulate their own use in more sophisticated ways. However, it's also important to keep in mind that there are complex ethical and legal considerations involved in the development and use of AI technology, and these will need to be carefully considered and addressed as the technology evolves.
Fábio: I understand. Thanks.
Open AI: You're welcome! If you have any more questions, feel free to ask. I'm here to help.”
The chasms created by technology are neither small nor shallow. Even if the CNJ and CNMP prohibit Brazilian judges and prosecutors from using ChatGPT, it will be practically impossible, or at least extremely complicated, to supervise the application of this administrative norm. The illegality of a decision or written statement produced by members of the Justice System with the use of a prohibited technological resource could lead to the annulment of the act at the request of the injured party, but the task of proving this irregularity would not be small.
ChatGPT said it could evolve to include routines it is not yet capable of performing, but it did not say when that will happen. I am not a Luddite, but as an experienced lawyer I am very afraid of innovations that have great potential to interfere with the distribution of Justice. Technological innovation does not always mean an improvement in the quality of the jurisdiction provided to citizens. This seems to be the case with ChatGPT.