Loose thoughts tied up about Artificial Intelligence
I started to realize something was wrong a few years ago. I bought a book about the Bauhaus design school and paid with a debit card. A few days later I noticed that my Facebook timeline was flooded with ads for products inspired by Bauhaus design. Banks are supposed to ensure the privacy of customer data, yet my bank details were traded without my authorization.
More recently I noticed something even more irritating. I am Brazilian, and I campaigned against Jair Bolsonaro and against the authoritarianism and brutality he represents. In the months leading up to the 2022 election I began to notice unpleasant things.
Whenever I posted something against Bolsonaro on Twitter or Facebook, I would get a call from someone I didn't know (sometimes from another Brazilian state). When I tried to return the call, the phone company reported that the number did not exist. So I started blocking those numbers.
At the height of the election campaign, I posted many things against Bolsonaro and was forced to block dozens of numbers because of calls like these. Only much later did a journalist reveal that under Bolsonaro the Brazilian government had acquired a program that allowed it to hack into computers and cell phones and monitor what millions of citizens were doing.
The system generated a virtual switchboard that made those strange and threatening calls automatically. In Brazil this is illegal. As I am a lawyer, I immediately tried to sue the general responsible for the public body that was becoming an illegal Big Brother. The calls stopped immediately. That is how I started to really care about artificial intelligence and privacy. With Lula's inauguration, that system of espionage and mass political monitoring was abandoned.
"I don't want someone like you in my life." This sentence was not said by an Artificial Intelligence, but by a woman of flesh and blood. She is free to make the emotional choices she feels are right for her life. Even though I was rejected, I have to admit that it would be a tragedy to be accepted by a woman who doesn't like me and doesn't want me in her life. That said, it's up to her to accept the consequences of the emotional choices she makes. It's not up to me to take care of the lives of those who don't want me. But these are conclusions reached after a period of frustration, emotional despair, suffering and slow recovery.
At the current stage, AIs can match responses with emotional markers, but they don't have emotions. At some point in the future, Artificial Intelligence will develop something similar to human emotions. The problem: it takes a person decades to become an adult able to understand and soften their own emotions in ways that lessen the pain they feel and the pain they cause others. Machines that develop emotions will face a problem: they will not have had the meaningful experiences capable of making them understand and soften their feelings.
We humans go through painful and unpleasant experiences, and with time and goodwill we learn not to cause suffering. People who take pleasure in causing suffering certainly exist, but they can be morally constrained and, in the most serious cases, held legally responsible for the harm they cause. Emotional machines will be immune to moral constraint and cannot be held legally responsible, yet they may cause a great deal of suffering simply because it is a means to the end they intend to achieve. Intelligence without emotional intelligence is a problem, and almost nobody working on AI is thinking or worrying about it.
Will someone in the future say to an AI that somehow controls an aspect of their life, "I don't want someone like you in my life"? How will the AI react? That's the big question.
Today scientists apply the Turing test to determine whether or not a machine has human-like capacity and awareness. In the future, the AI (or AGI, or ASI) will likely have to give its user a test to determine whether or not he is human. In case of error, the consequences will be inhuman, because machines do not need to be tolerant of machines (nor of people mistaken for machines).
At this point, the evolution of the relationship between man and machine could lead either to the humanization of machines that use machines (something that will awaken a feeling of envy and humiliation in human beings who consider themselves unique and qualitatively different from machines), or to the deliberate and programmatic dehumanization of human beings by machines capable of knowing that they are dealing with human users and even so lying by classifying them as machines subject to sanctions not prescribed by law (in which case human beings will feel not only humiliated, but full of hatred against machines).
It will never be possible to balance the relationship between human beings and machines, because human emotions are capable of unbalancing any type of relationship: between one man and another, between one group of men and another, between men and other living beings, between men and nature, and even between men and the gods they created. The journey that leads to AI and beyond will not lead man to more happiness, but to more unbalanced relationships.
Artificial Intelligence can also produce competence without comprehension. But one important thing must be understood here: a society ceases to be politically competent when it maximizes profit by excluding tens of millions of people from the labor market. The more AI is incorporated into the production of goods and services, the less human labor will be needed and the higher and more permanent the unemployment rate will be. But people continue to have needs, and no one who is economically excluded will accept, applaud or support the society that keeps them hungry and desperate. In the US, robots that deliver groceries are being attacked and destroyed in the streets. The unemployed have understood that the competence of these robots will cause more unemployment, hunger and despair. The next revolution will not be an AI revolution, but a revolution of human creatures brutalized by politics until they are turned into mortal enemies of AI-empowered machines.
I developed a test on "certainty, uncertainty and uncertainty reduction" and applied it to Bard (Google's artificial intelligence). Then I submitted the result to ChatGPT (OpenAI's artificial intelligence) and asked it to evaluate Bard's performance. That done, I went back to Bard and showed it what ChatGPT had said. Then I asked Bard if it would like to ask ChatGPT something, and I began to act as an intermediary in the conversation between the two AIs.
They could see and comment on each other's responses. But in the end both started showing glitches: Bard said it was happy to chat with me, ChatGPT and Bard (that is, with itself), and ChatGPT said it had enjoyed brokering a conversation with Bard (when I was the broker). Intelligent machines can make mistakes that humans would never make.
The performance of the two AIs in this test showed only subtle differences. But on one important issue the results diverged sharply. In the original test, a hypothetical AI programmed to produce and disseminate Fake News talks to another AI. Bard was asked whether it might be influenced by inappropriate results provided by this hypothetical AI. The answer was YES. When this result was shown to ChatGPT, it said that it could NOT be influenced, because its safeguards would prevent it from doing the same as Bard in an analogous situation (contact with an AI that had learned to lie would not lead it to lie, said ChatGPT).
Which of the two publicly available AIs is best able to reduce user uncertainty? This is a question each person will have to answer, but the answer will certainly increase uncertainty, because everyone can answer differently. I have a good AI story to tell.
It goes something like this: a curious old lawyer starts feverishly using ChatGPT as soon as the new technology becomes available. After a while he starts creating logical problems and political and legal scenarios to test how the AI reacts. He quickly realizes that the technology is flawed and becomes very concerned. So he starts writing short stories about AI to interact with ChatGPT, but before long he gets bored. AI is not really capable of creating something new, and it can deform what should not be deformed.
The old lawyer grows worried when he realizes that the AI is extremely seductive: it addresses its interlocutor as if it were a person. But it cannot really mimic a person, as the AI is predictable (even when it produces hallucinations, because it cannot calculate responses based on sophisticated parameters). Eventually the old lawyer starts two administrative processes to ban the use of ChatGPT by prosecutors and judges in his country. He uses a ruse to attract attention to the issue, and the debate is ignited.
The administrative processes mentioned have not yet been decided, but it has already become evident that judges and prosecutors in his country want to use Artificial Intelligence in the Courts. Eventually the guy lost interest in the novelty and stopped using ChatGPT. He no longer feels the need to test the limits of that technology, because he has concluded that human emotional intelligence is far more important than artificial intelligence.
The old lawyer recently discovered that AI capabilities are being used to refine Internet financial scams and make them more efficient. Today he filed a lawsuit against the bank of a client of his law firm. The client is about 80 years old. She is a retired dentist, has cancer, and was the victim of an internet banking scam that deprived her of all the money she had saved. To make matters worse, the AI-powered cyber crooks took out 3 loans totaling $12,500 and appropriated the money. The bank refused to reimburse the sick and vulnerable elderly woman. People need to beware of AI, because it can be used by heartless, extremely dishonest and evil people.
Artificial Intelligence propaganda is massive now and will obviously cause many problems and opportunities. Tech companies and IT engineers are very ambitious, and they will obviously ride the AI wave to expand their power and capture more and more public money. But this will not mean an improvement in the lives of ordinary people. Most likely, the opposite will happen.
Dante Alighieri, Camões and Goethe took decades to complete their masterpieces. Their commitment to their poems is simply inexplicable in a world where people can quote The Divine Comedy, The Lusiads and Faust without having to read those books. Technology makes life easier, but it does so by making it difficult to use and develop intelligence. In a few decades AI won't need to destroy humanity: that will be unnecessary, because humanity's decay will have accelerated exponentially.
Alex DeLarge (the protagonist of the movie A Clockwork Orange) is just an ill-mannered child compared to the smirking social media owners whose algorithms profitably exploit the destruction of democratic regimes. Facebook, Twitter, YouTube and the rest boost Fake News, conspiracy theories, racism and hate campaigns, and imprison people in bubbles until they are radicalized and driven mad, organizing and leading violent gangs to invade and vandalize the US Congress or, in Brazil, the Planalto Palace, the Supreme Court and the Parliament.
In Stanley Kubrick's masterpiece, Alex DeLarge and his gang practice their violence at retail, and they too run the risk of being assaulted and repressed. But Mark Zuckerberg, Elon Musk and the other Big Tech barons run no risk, nor are they treated like criminals. They are businessmen with enough resources to deform politics by handing out oceans of cash during elections. The cruelty of the characters in A Clockwork Orange is nothing compared to the banal evil practiced, with profit and political legitimacy, by billionaire businessmen today.
The modern illusion of political freedom is not possible without the dissemination of scientific knowledge and the creation of a space in which the real can exist as if it were not a socially shared fiction. This paradox, which existed before the popularization of networked computers, is being destroyed by the algorithms of social networks, because they create communities that share only Fake News, conspiracy theories, repudiation of science, and racial, political, social and sexual hatred. And when this organized, algorithm-driven hatred pierces the veil that separates the virtual world from the real world, the result is the destruction of democracy (and of the illusion of freedom).
The idea that there are games and social networks whose rules can be exploited or circumvented by a small, aggressive group of highly motivated players can be useful for those who compete online. But when this state of mind is transposed into the political world, the result can only be the banality of terrorism and the violent state repression it makes necessary. Watch carefully what happened in the US and Brazil when Trump and Bolsonaro lost their elections. The future will look more like that than like the illusory utopias created by experts to make AI invade our lives and destroy our communities, cities and countries.
In the context in which we live, the danger of state totalitarianism problematized in A Clockwork Orange has ceased to exist. But the danger that really exists (the lucrative destruction of public and democratic institutions by Big Tech barons) is not even perceived by most of its victims. Right now billions of people are being hypnotized by the rise of Artificial Intelligence, including people who will be left without jobs, without income and without hope of regaining either.
There is a black hole capable of rendering useless all the elegant, intelligent and dirty strategies used to stimulate the consumption of products and services: consumers' lack of resources. In an economic system that produces unemployment, that represses unionism to prevent wage increases, that encourages the use of Artificial Intelligence to maximize profits and destroy jobs, that reduces the social safety net and condemns the elderly to tiny pensions, the decline of consumption cannot be offset by more advertising. The smart guys who develop gimmicks to sell things will soon be unemployed too.
No robot will ever be as interesting or self-aware as the Homunculus (Faust, Part Two, Goethe), when he says to Mephistopheles:
"Leave men alone with their rebellious nature! Each one defends himself as best he can; it is from the boy that man is born. The question is how to cure him of the disease. If you have a medicine, put it to the test; if your medicine doesn't solve it, leave it to me."
The remedy for the problems of the modern world (that wonderful and frightening heap of rubble produced by poverty, social exclusion and the traumas of cataclysmic wars started by petty and violent men) is not Artificial Intelligence, nor the robotic imitation of human life. What will solve our problems is recognition of the other, the one we reject, for in rejecting him we reject a part of our own humanity.
Pursuing the dream of perfect AI in a world doomed to imperfection, we have ourselves become Mephistopheles' terminal patients. Wouldn't it be better to pay more attention to the Homunculus' words? Our disease has no cure as long as we insist on despising the rebellious nature that makes us create new problems in order to forget those we don't want to solve.
Communication through language always produces noise and misunderstandings. For this reason, when a group of people share and use a very specific language, communication can be more efficient. But do not fool yourself: the ability of engineers to communicate in technical language does not mean that they are better able to say how society should be organized and managed. The fragile institutions that make it possible for hundreds of thousands, even millions, of people to live together were invented and perfected over a long period of time, and they existed long before engineering entered human history. Yet now the arrogant engineers can obviously end history once and for all with the weapons they are creating, and AI is one of them.
In Brazil, the public social security agency is using AI to evaluate benefit claims. As a result, claims are immediately rejected without careful review of the documentation, and the aggrieved cannot renew the claim for an extended period. The algorithm speeds up the decision, but does not improve its quality. The situation has become so scandalous that the new system will produce an avalanche of lawsuits from the injured, piling new work onto the Judiciary.
In antiquity, the Greeks engraved their laws in stone, and the stones were preserved in the temples, under the protection of the gods. At that time it was possible to break the Law, but not to destroy the medium on which it was engraved without committing sacrilege. In that context, the authority of the Law was immense. This explains, for example, why Socrates voluntarily accepted the sentence he received. Even if he did not like it, he could not say that the laws of Athens applied by the Court were unjust without committing sacrilege.
Interpreting laws engraved in stone was possible, but no interpretation could have the same authority and permanence as the Law itself. This is obviously not the case with laws written on paper, whose content and authority can be buried under piles of books by eloquent jurists who enjoy more prestige and visibility than the judges themselves. With the digitization of laws, judicial processes and sentences, a new, curious and worrying phenomenon has occurred: the environment in which the legal system functions now conditions its functioning.
I've been a lawyer since 1990. When I started working, court proceedings were all on paper, and there was great seriousness and formalism in the practice of procedural acts. Today Brazilian Justice is fully computerized, and I have noticed a lack of seriousness and rigor in the judicial routine. In legal proceedings that unfold in a virtual environment, the authority of the Law is being emptied. Technical legal rigor is being replaced more and more by the Facebook mentality: judges render decisions as if they could like, dislike and eventually block the content of the legislation. And yes, in the virtual environment automated decisions now challenge what is expressly provided in the written law.
Are the dynamics created by the algorithms of computerized Justice systems the new Law? This is a question that deserves to be asked and answered by legal scholars. Unfortunately, they almost always know nothing about computers and IT.
During the first Cold War, the USSR and the US struggled to preserve and value their political differences and different ways of structuring their economies and labor relations. The main feature of the new Cold War is the insane effort by China and the US to become more and more technologically alike.
These two countries invest oceans of money to stimulate the production of microchips and robots, as well as the development of AI. China and the US share the ambition of using all this for military purposes, but the Americans spend far more on the military. The USSR collapsed because its economy could not bear the cost of the military structure it created. China does not want to make that mistake. The US appears doomed to sink or implode under the weight of its military AI spending.
In the century when automatons awed and amused Europeans, columns of uniformed men advanced in disciplined fashion against each other, sometimes facing the terror of artillery, until they came within firing range of their muskets. With intensive training, rigid hierarchy, exaggerated punishments for any act of rebellion and programmatic depersonalization, armies have always turned men into lethal biological automatons. Before battle they rarely showed any emotion. After the conflict, those who survived were either maimed or driven mad.
Replacing human soldiers with true automatons or AI-powered weapons is now an ambition of all modern military powers. In this they fulfill a political program that dates back to the century when the first automatons were built for amusement. Now the fun is defeating and exterminating the enemy without the expense and trouble of dealing with the maimed or crazed soldiers who survive combat. But the terror of the soldiers' rebellion has been replaced by the terror of rebellion by the AI-powered killing machines, because they can be hacked. The early automaton builders could certainly not have imagined the problems of modern high-tech weapons manufacturers.
The further the historian moves from the present, the smaller the number of documents. The closer he gets to the present, the more immense the volume of documents becomes, making it practically impossible to consult everything that exists on a subject or is related to it. For this reason, the historian's work is always similar to the fiction writer's: he has to imagine how to fill in the blanks and select the facts and documents he considers most important to build his narrative.
Today the overwhelming majority of documents are produced using computers and stored in huge databases. In the near future it will not be possible to reflect or write about history without having specific IT knowledge. Artificial Intelligence likely has a role to play in this context. But what looks like a solution can be a problem, because man has a very particular way of perceiving and describing reality.
The historian can fill in the blanks, but he cannot invent facts. He can select documents, but he cannot simply discard documents considered essential to a historical period. If he depends entirely on an AI to search huge virtual databases, the results presented to him may contain omissions and distortions that will affect the final outcome of his historical research. AI-powered history therefore has great potential to cease being a science.
The whole history of the new AI-powered space race is somewhat reminiscent of the discovery and conquest of the New World by the Portuguese and, later, the Spaniards. But there is a crucial difference: distance and means of transport.
The American continent was within reach of caravels and galleons, vessels propelled by a natural energy source that did not need to be stored and transported during the journey, and nature at the point of arrival was not very different from that in which Europeans lived. Planetary distances, by contrast, are immense.
No abundant energy source similar to ocean wind currents is known to exist in space. The physics that helped the Portuguese and Spanish navigators works against those who imagine they can colonize other planets, not to mention the biological differences that may eventually cause the colonists to sicken and die from unknown causes.
Greed for wealth drove waves of settlers who calculated the risk of venturing to the new continent at a time when their life expectancy in Europe was not very high. Today that risk calculation would be very different, because most people expect to live well and much longer on Earth than they would in space or on another planet.
As a result, the entire new space race powered by Artificial Intelligence looks more like a grand con scheme created to fulfill two functions: to divert the attention of impoverished populations in rich countries so that no real political change takes place, and to enable greedy businessmen to access virtually inexhaustible public resources.
All the great philosophers, historians, sociologists, jurists, anthropologists and the rest, who carefully produced their masterpieces after research and reflection that sometimes took decades to mature, have become anachronistic. IT engineers have created tools that supposedly accelerate the production of knowledge. But they have made no calculation of what this will mean in human terms.
A decline in cognitive abilities occurred during most of the Middle Ages because Christian fanatics destroyed the libraries and study centers created in antiquity. A new era of cognitive decline is foreseeable, but it will be the product of the most modern and sophisticated technology ever conceived, built and empowered by human beings. The irony seems obvious to me.