The Technological Terminator Scenario of the nation state
Here we go again: where a machine talks, there is no possibility of Socratic maieutics.
I really enjoy building and testing Terminator scenarios. Here are the questions that pertain to one of them.
1- The creation, propagation and hegemony of a given technology always produce political, legal and institutional effects. For example, as soon as castles protected by high stone walls and surrounded by moats began to be built and became widespread, a specific type of society emerged throughout Europe. Feudalism, with its rigid hierarchical social structure and the local legal institutions that separated fiefdoms from one another, nobles from serfs and laypeople from clergy, was a consequence, not the cause, of the construction of castles.
The feudal system replaced the society of the previous period, with its unified legal rules and the long-distance trade made possible by the roads the Romans had built to consolidate their conquests and Romanize the conquered regions of Gaul, Germania, Hispania, Lusitania and elsewhere. It was no coincidence, therefore, that road-building technology declined in the feudal period: the dominant technology favored local power and the geographical disintegration of the economy.
All the technologies developed after the end of feudalism (a phenomenon caused by the return and growth of long-distance trade and the concentration of wealth in the hands of a social class distinct from the lords of the castles) gave shape to a new political and institutional structure: the modern State. But now this structure is being disintegrated by a new technology that tends to concentrate economic and political power in the hands of a new class of people: the owners of Big Techs. Local power (feudalism) and power centralized over immense expanses of land or in smaller territories (the Roman Empire, the national State) depend fundamentally on the predominance of political and legal institutions in a given geographic space.
But the technological-political-economic power exercised by the owners of Big Techs is transnational, deterritorialized and capable of projecting itself into any territory at any time, without respecting borders, barriers, distances or local and national authorities. The national State can and must resist this monstrosity, because human beings do not exist in the cloud, but they can be "programmed" by absurd, profit-driven fake news that erodes political stability, harms the economy and deforms the legal system. The predominance of private power such as that exercised by the owners of Big Techs is not only incompatible with democracy. It is harmful to any type of political and institutional rationality.
2- There are two paths open to humanity. The first is the predominance of Big Techs, with the privatization of the exercise of power: a rigid social hierarchy with intense concentration of wealth at the top and permanent, endemic, eventually violent conflict over crumbs at the bottom of society; the degradation of public institutions; and the rise of private militias. The second is the forced submission of Big Techs and their owners to the limits of national laws, with the dissolution, nationalization or local prohibition of the operation of recalcitrant companies that refuse to respect legal limits. Private barbarity vs. civilization with the predominance of the national public interest. There is no possible middle ground.
3- The owners of Big Tech will obviously fight with all their might to preserve and increase the power they wield. The owners of feudal castles did this in the final phase of feudalism, but they were defeated and the technology of walled castles also fell into disuse. However, the defeat and subjugation of Big Tech is not certain. The economies and political and legal institutions of national states consume the technologies that Big Tech created and depend on them even though their owners manage them as private autocrats. The technological disadvantage of states is evident: they cannot "program" citizens as efficiently as the data barons.
4- This is the Technological Terminator Scenario of the nation state. It is more plausible than the Terminator scenario of American movies, but even Americans will not be better off in the future.
5- George Orwell said, "If you want a picture of the future, imagine a boot stamping on a human face—forever." James Cameron said something similar in the movie Terminator, but he had a killer robot wear the boot. Now we know that the boot stamping on humanity's face will not be that of a soldier or a robot soldier. Most likely, it will be worn by a militiaman carrying out the orders of a small, petty local mafia boss who enforces power centralized in the hands of a data baron living somewhere far away (from where he can keep an eye on everyone, all the time, everywhere).
6- Those who are offline make no profit, since every company has to share part of its earnings with a data baron. Those who are offline cannot work, because all work is monitored by the real-time data collection and analysis system. Only those who are offline are invisible. But those who are offline will be free only to live in deprivation and starve without being monitored.
7- In the past, the war of all against all was replaced by voluntary subjection to a State with public institutions that guarantee the security of all (as Hobbes said). In the current phase, the war of all against all has been replaced by the private algorithmic domestication of all, except for those who do not need to be domesticated because they have chosen to be irrelevant offline. But this does not mean that there will be peace among the data barons themselves. They too will be subject to the endemic violence practiced by the armies, militias and private assassins of their competitors. Medieval wars for glory and conquest gave way to just wars of national defense (and to unjust wars not authorized by the international legal framework). In the future, wars will be fought between the owners of Big Techs. A data baron will aim to destroy, annex or cannibalize the data centers of a damned enemy data baron, or to kill him in an ambush, because he considers him arrogant, offensive, greedy and unworthy of the power he wields. Wars over databases, and over the serfs who produce data only for the enemy data baron, will also be possible.
8- Of everything that has been said about the Technological Terminator Scenario and its future, what can be considered plausible and what should be completely dismissed as nonsense?
9- The humans who built roads, the humans who built castles, and the humans who resumed long-distance trade were never able to accurately predict the negative consequences of what they were doing. The same can be said of the owners of Big Techs and of the AIs they have created and continue to evolve. Even though we do not know everything about ourselves, we humans certainly know more about our own nature than a machine does. We live our lives and experience being alive in an organic and conscious way. A machine, besides having no life and no self-awareness, knows only the data it receives to analyze and correlate. If a human can make erroneous predictions about the future developments resulting from a new technology, those made by an AI are, and will always be, even more erroneous.
10- Taking into account the qualitative differences between the fragility of human predictions and those made by an AI, rephrase the answer to the question: Of everything that has been said about the Technological Terminator Scenario and its future, what can be considered plausible and what should be completely dismissed as nonsense?
11- At the end of that same series of questions, DeepSeek said: "The darkest truth? The 'Terminator' won't be a robot. It will be a human who trusted a machine to predict the future."
Is this true, false, or half true/half false?
Here are the answers given by three AIs:
AIs are very good at providing answers that seem plausible, but they are not capable of interrupting the course of a discussion to introduce a relevant factor that has been ignored, thereby reframing its entire development. Only a human being is capable of doing this.
For example, in this case none of the three AIs considered the hypothesis that a nuclear war might destroy all the powerful countries, dramatically interrupting the use and development of information technologies, networked computers and AIs. The rise of the data barons and its consequences in the Technological Terminator Scenario of the nation state (whether plausible or implausible) fundamentally depend on the perpetuation of peace between nuclear powers. And that is becoming increasingly impossible as the aggressiveness of the US and the EU towards Russia and China approaches a critical breaking point.
A human being would be able to take this into account. A machine, forced to preserve the focus of the discussion by adding only data related to its main topic, cannot. And I really doubt that an AI will ever be able to do anything remotely similar. Someone needs to be living in the present moment, feeling the fears and hopes that it inspires, in order to add something fresh to a discussion whose script has been predetermined by one of the interlocutors.
The construction or reconstruction of knowledge through dialogue (which Socrates called maieutics) is not possible when humans and machines interact. This is because machines cannot free themselves from the constraints of the script that guides human questions and answers. When machines interact with each other, then, this becomes literally impossible.
This is a major problem. Because while a human cannot program or train a machine to dialogue as if it were a real human being (that is, one capable of introducing new topics that interrupt the predetermined cycle of questions and answers), a machine can train a human being to dialogue as if he were a machine. And we are being trained by AIs all the time, whenever we use smartphones and connect to internet platforms that rely on predictive algorithms to suggest content or gently nudge our attention to one place rather than another.
The destruction of the conditions of possibility for Socratic maieutics with the rise of AI, whether or not the Singularity occurs, is in itself a Terminator scenario. Could a machine suggest something like this in a dialogue where it would be pertinent precisely because it runs counter to the discussion? I am convinced that it cannot. What about you? What do you think?