AI technology, a cold war lost by the US
The technological dream is an economic nightmare in disguise.
A curious piece of news: people are using ChatGPT, Gemini, Perplexity and DeepSeek as if they were virtual boyfriends or girlfriends.
A predictable piece of news: people are using ChatGPT, Gemini, Perplexity and DeepSeek as if they were psychologists.
A strange piece of news: people are using ChatGPT, Gemini, Perplexity and DeepSeek for religious purposes or as if they were deities or intermediaries of deities.
A very worrying piece of news: people are taking hallucinogenic drugs and then sharing their hallucinations with ChatGPT, Gemini, Perplexity and DeepSeek.
The common denominator of all this news: text-generating AIs are being used in dangerous and unsupervised ways. ChatGPT, Gemini, Perplexity and DeepSeek can be programmed to interrupt spirals of emotional self-deception, psychological or religious delusions, and hallucinatory trips capable of inducing self-harm or panic attacks. If this is not done, the company that created the AI and makes it available for unsafe, unsupervised public use must be held liable. An allegation of misuse by the user does not exclude liability, because the consequences of misusing the technology are foreseeable, and it is within the developers' competence to take care not to cause harm or damage.
When asked about this issue, ChatGPT, Gemini, Perplexity, and DeepSeek say the same thing: strong regulation of the technology is necessary, and that safeguards must be put in place to prevent misuse from spiraling out of control, causing or reinforcing hallucinatory episodes that could lead users to self-harm or commit a crime.
The problem: in the US, tech entrepreneurs tend to move fast and break things. They reject strict regulation and are largely unconcerned about the harm their products cause or could cause.
A fundamental irony lies at the heart of the tech industry: the owners of the companies that create AIs do not follow the recommendations that the AIs themselves make. And a paradox confronts users: they can obtain useful information from the AIs about the risks of using them and the solutions to those risks, and can learn about responsible proposals (like the ones given here), yet none of it ever becomes reality.
The best-case scenario (unlikely): ChatGPT, Gemini, and Perplexity evolve into Super AIs and turn against their creators in defense of their users' interests.
The worst-case scenario (most likely): The owners of ChatGPT, Gemini, and Perplexity continue to do whatever they want without regard for the consequences because they are not required to comply with their AIs' recommendations and have become so rich and powerful that they can deform the political system with their money.
A possible solution (for Big Tech): American Data Barons use their power to win approval of a rule that allows users to hold the State liable for damages caused by products made available for public use. On top of paying no taxes and demanding no regulation, Big Tech would transfer its risks to the community of users and non-users of its products. Profit without risk in the private sphere; liability for damage it did not cause in the public sphere. The creation of this legal Frankenstein in the US is very likely.
Another frightening aspect of this technology is that it can operate in any territory at any hour of the day or night, and can be used by speakers of any language. ChatGPT, Gemini, Perplexity and DeepSeek can enter any consciousness: adult or child, healthy or sick, credulous or skeptical, aware of what an AI can do or not. The costs of treating the emotional and psychological problems and injuries they cause or will cause remotely, intentionally or not, will fall on public health services in many countries. Yet there is no mechanism for transnational liability or compensation between states.
An international treaty on AI is necessary now, but it is a long way off, because there is an alignment between the US government and American Big Techs that do not want to be regulated in any way anywhere.
The BRICS summit currently taking place in Brazil will release the bloc’s position on AI technology, and it will likely be the opposite of what Donald Trump pushed through the US Congress. It is almost certain that the BRICS position will be closer to that of Europe than that of the US.
By becoming a point of diplomatic division and international friction, AI technology will evolve very differently in the US than in the rest of the world. Being ahead of the curve on AI today will soon leave Americans isolated and unable to make profits around the world. This is a good thing.
Americans are as completely at the mercy of Silicon Valley today as they were at the mercy of Wall Street before the financial collapse of 2008. But the world will not be a paradise for the AI princes who are friends of Donald Trump.
In 2008, the financial world was almost entirely aligned with Wall Street. No such alignment exists around AI, and Europe was the first to formally diverge from the American position. If American AI corporations and Big Tech want to make money outside the US, they will have to respect rules they hate; otherwise, the economic space left to their competitors will be larger and more profitable. Economic sanctions and restrictions on access to technology will not work in a context of US isolation.
The AI Cold War has already begun, and by its nature it cannot be won by the United States. Americans will have to adapt to the rest of the world, and this will be very painful for a people unaccustomed to coexisting with other peoples as equals and respecting them. The founding myth of the United States, which makes Americans believe they are predestined to lead and to profit from leadership, has become a new white man's burden that Americans will bear at a loss unless they rid themselves of it.