One of the most striking aspects of texts that discuss Artificial Intelligence or recommend its use is the obsessive concern with “promoting trust in the use of AI”. The issue has even become a subject of research in its own right. Below I transcribe some excerpts from the work of Brazilian experts Eduardo Paranhos and Thomas Côrte Real:
“Reasonable and appropriate transparency measures, such as explainable AI techniques, allow users to understand how AI systems reach decisions, promoting trust and acceptance of AI technologies.” (Comentários ao EU AI Act – Uma abordagem prática e teórica do Artificial Intelligence Act da União Europeia, Rony Vaizof – Andriei Gutierrez – Gustavo Godinho – Alexandra Kastris, Thomson Reuters-RT, São Paulo, 2024, p. 177)
“… AI governance is essential to promote trust among the parties involved (developers, distributors, deployers and users), a crucial condition for its broad adoption and for acceptance of the technology as a whole.” (idem, p. 178)
“Indeed, the benefits of effective AI governance go far beyond risk mitigation. Equally important, governance promotes public trust, a critical component for the increasingly widespread adoption of AI.” (idem, p. 179)
“Effective and appropriate governance that encompasses oversight mechanisms while fostering innovation and trust is crucial to ensuring that the development and use of technology occurs in an ethical, safe and legal manner.” (idem, p. 179)
“Continuous human oversight is also a practice that should be adopted for certain technologies, as it allows automated decisions to be reviewed and adjusted, avoiding unforeseen and/or harmful consequences. A structured and proactive approach to governance mitigates risks, promotes trust in technology and maximizes the benefits of AI for society.” (idem, p. 179)
“… we will examine the governance measures outlined in the AI Act, focusing on Suppliers’ obligations, oversight mechanisms and security practices that are essential to reduce potential threats and promote trust in the use of general-purpose AI.” (idem, p. 180)
While reading the text by Eduardo Paranhos and Thomas Côrte Real, I was reminded of Ulrich Beck’s observations in a seminal work I read over a decade ago:
“Risk society encompasses a tendency toward a ‘legitimate’ totalitarianism in defense against danger, which, in the name of avoiding the worst, ends up producing, as common experience shows, something even worse. The political ‘side effects’ of civilizational ‘side effects’ threaten the democratic political system in its own domain. That system finds itself confronted with the unpleasant dilemma of either failing in the face of systematically produced dangers or revoking, by means of authoritarian ‘supports’ derived from the police power of the State, basic principles of democracy. Breaking out of this dilemma is one of the crucial tasks of democratic thought and action, in view of the future now facing risk society.” (Sociedade de Risco, Ulrich Beck, editora 34, São Paulo, 2010, pp. 97-98)
The risks run by those who fly or embark on a cruise are minimized by the adoption of safety standards in the manufacture and maintenance of airplanes and ocean liners. The education and training of pilots, captains and their crews also receive careful attention. No one would board a plane knowing the pilot was inexperienced, nor an ocean liner commanded by Francesco Schettino, the captain who not only sank the Costa Concordia but abandoned ship without a thought for the passengers.
The European Union took the lead by approving the EU AI Act, which Eduardo Paranhos and Thomas Côrte Real comment on in the book cited and excerpted above. The emphasis they place on building public trust makes sense. But it plays a curious role.
The developers of AI systems themselves publicly admit that the technology can pose a risk to human civilization. Sam Altman said a year ago that the technology his company is building and working to perfect could kill us all. Paradoxically, the OpenAI founder wants the world to use AI. He supports regulating the technology.
The trust of potential consumers is, without a doubt, the main asset of the airlines and shipping companies that transport passengers. The risk passengers run when boarding a plane or an ocean liner is real and must be minimized. That risk, however, falls exclusively on those who buy the ticket. With AI, there is no such limit on who bears the risk.
If a certain banking institution, which I will call Bank A, starts using an AI system, all of its account holders will be exposed to risk regardless of whether or not they trust the new technology. If Bank A’s AI fails and compromises its liquidity, triggering a bank run, the country’s entire financial system could be contaminated. Account holders at other banks will obviously also be exposed, even those who chose to avoid Bank A precisely because it had adopted AI.
In 2008, the financial crisis that began in the US quickly spread to Europe and Japan. Fortunes disappeared, unemployment rose in the northern hemisphere, and confidence in the banking system was only restored with the public bailout of private banks that participated in the subprime financial party. The use of AI in the financial market is already a reality. Will it prevent or cause the next financial crisis?
The risk that something like this hypothetical scenario could materialize is not merely rhetorical. If it does, the obsession of journalists and legal experts with building trust in AI will not only have been useless; it will have been harmful to democracy, as Ulrich Beck warns.
A technology with the potential to cause large-scale damage, one that can harm even people living in another country or people who refused to use it (who were not even customers of the bank that developed or licensed it), is intrinsically totalitarian. Unfortunately, this totalitarian potential of AI seems to be going unnoticed.
We really need to take three steps back and think carefully before we move one step forward with this technology. But people are doing the opposite. They are rushing forward as if everything will be fine. Will it really be? We'll see...