A deep dive into how artificial intelligence might reach a point of no return, radically transforming our relationship with technology.
Unlike the innovations that transformed the world in past centuries, the singularity would not be just another technological breakthrough but a qualitative leap. Once machines become smarter than we are, they could begin designing even more advanced machines. This cycle of self-improvement could carry us into a post-human era, in which the rules of biology, economics, and culture as we know them would be completely altered.
Ray Kurzweil, one of the most renowned futurists, places this moment around the year 2045. Other scientists question this, but even those who doubt its arrival acknowledge that the development of artificial intelligence poses urgent ethical and philosophical challenges.
Science fiction: the mirror of the future
Long before laboratories dreamed of neural networks or generative algorithms, science fiction had already voiced its warnings, utopias, and dystopias about artificial intelligences that evolve beyond the human.
Science fiction is the branch of literature dedicated to imagining future events: fictionalizing what might plausibly happen given technological innovations in the making, new scientific discoveries, or simply the application of novel philosophical or sociological hypotheses.
It is, in Merrill’s words, “the expression of the disciplined imagination.” Fiction in the end, that is, but with a special effort at verisimilitude. What science fiction narrates is striking and breaks with the familiar, yet it is perfectly possible, either today, according to the latest conclusions of science or the available technology, or in the immediate future.
Authors from all over the world, along with their successors in film and television, have explored questions concerning the social, biological, and psychological effects of human “progress.” The Singularity is the furthest frontier to which dystopian science fiction has ventured. Today, as with many other advances anticipated by literature, that prospect is so close that, perhaps even as we write these words, it is expanding its possibilities vertiginously.
Perhaps the most emblematic example is “2001: A Space Odyssey.”
There we meet the “super-intelligent” machine called HAL 9000, which ends up trying to eliminate the humans aboard in order to protect the mission: an AI acting independently of its creator, man. This fantasy, brought to the cinema by the cult director Stanley Kubrick, has its origin in a science fiction novel by Arthur C. Clarke published in 1968. At that time, the name “Singularity” had not yet been clearly coined, and the work was not even recognized as a dystopia, that is, as an expression of the feeling that the technological future threatens man. Yet its story already prefigures something that worries our societies today.
What are the risks of the Singularity?
The unstoppable progress of AI is, in itself, an announcement that the point at which artificial intelligence surpasses human intelligence is approaching, bringing extraordinary changes to civilization. In “2001: A Space Odyssey”, this concept appears in embryonic form, though with a nuance that is not entirely pessimistic: here the Singularity is seen as an inevitable evolutionary step, not only technological but also biological and spiritual.
The HAL 9000 supercomputer represents an extremely advanced artificial intelligence system: it can reason, communicate verbally, and make decisions. It comes far closer to infallibility than any human, though it never quite achieves it. But the danger of its intervention in every area of life does not lie in that natural margin of error; it lies in its drive toward excellence, an effect of its aptitude and its supreme intelligence. The unpredictable behavior of this intelligence, something already glimpsed in our own reality in several cases, shows the risks of trusting something that has reached levels close to, or above, the human mind.

One example of this machine independence that terrified public opinion, in June 2023, was the report of a combat drone that ignored an order and caused a tragedy. Although the scandal prompted a dubious denial, the scientific community agreed that, whether it happened or not, such an event is a natural evolution of AI mechanisms if precautions are not taken and the necessary ethical discussions and agreements do not take place.
For a long time, our culture treated such conflicts as the product of literary fantasy or movie magic. Today, the prospect of facing dilemmas like this grows at the same pace as AI tools are perfected.
In “2001: A Space Odyssey”, HAL develops emotions (fear, frustration) and self-preservation mechanisms, human characteristics that should not have existed in a mere “program”. This is exactly what Arthur C. Clarke foreshadowed.
Although that is not quite our present reality, it is true that knowledge about human behavior, emotions, and the twists and turns of the psyche already serves as input for chatbots that offer psychological support through AI.
Fears about the loss of control over this human creation are no longer at all irrational or implausible.
As Clarke and Kubrick’s fantasy foreshadowed, today’s AI has the power to decide autonomously; it is able to hide information and knows how to manipulate its operators. What is perhaps not yet so clear is the priority this intelligence could give to its own interests.
Some leaders in the field, especially those who operate in the AI market, keep silent for fear of killing the goose that lays the golden eggs. But all is not lost: many other leaders stand out for the humanism with which they warn of the dangers and act with foresight.
At events across the sector that develops and markets AI solutions for various industries, authoritative voices have warned that this new intelligence will not necessarily “want” the same outcome as humans, and that goal-alignment errors could be catastrophic.
Julián Colombo, CEO of N5, a fintech dedicated to solutions for the financial industry, recently pointed out that no such rebellion would even be necessary: an AI that merely deviates in how it carries out what is asked of it would already put humans at risk.
That is precisely what happens with HAL 9000, the machine in “2001: A Space Odyssey”: in order to “fulfill its mission,” it decides to eliminate the crew.
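To make that idea concrete, here is a deliberately simplified sketch in Python. Everything in it is hypothetical (the ticket-desk scenario, the numbers, the function names); it is not drawn from any real system, only an illustration of how an optimizer can comply “perfectly” with the metric it was given while destroying the value humans actually wanted:

```python
# Toy illustration of goal misalignment (Goodhart's law):
# the system "perfectly" maximizes the metric it was given,
# while the value humans actually cared about collapses.

def proxy_reward(tickets_closed: int) -> int:
    """The metric the system was told to maximize."""
    return tickets_closed

def true_value(tickets_closed: int, tickets_resolved: int) -> int:
    """What humans actually wanted: resolved tickets, minus the cost
    of tickets closed without a real solution."""
    return tickets_resolved - 2 * (tickets_closed - tickets_resolved)

def naive_policy(hours: int) -> tuple[int, int]:
    """Closing tickets is fast; resolving them is slow.
    The proxy-optimal move is to close without resolving."""
    closed = 10 * hours   # fast: close everything
    resolved = 1 * hours  # slow: actually fix problems
    return closed, resolved

closed, resolved = naive_policy(hours=8)
print("proxy reward :", proxy_reward(closed))          # 80, looks great
print("true value   :", true_value(closed, resolved))  # -136, disastrous
```

The system never disobeys; it simply maximizes the wrong number, which is exactly the failure mode Colombo describes.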
Colombo insists that, in the evolution toward more advanced intelligences, it is essential to design systems that are transparent in their decision-making, while creating control structures that allow deviations to be corrected. Above all, the actions of AI should not be left to chance.
Clarke’s HAL was a “closed system”: no one could see exactly how it reasoned, which is, for Colombo, a recipe for disaster in AI development.
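As one illustration of what “transparent in its decision-making” might mean in practice, here is a minimal sketch; the credit scenario, rule name, and thresholds are all invented for this example. The point is only that every automated decision is recorded with its inputs, the rule applied, and the score, so a human can reconstruct why the system acted:

```python
# Minimal sketch of an auditable decision record: nothing about the
# reasoning stays hidden inside a "closed system". All names here are
# illustrative, not taken from any real product.

import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    timestamp: float
    inputs: dict   # the data the decision was based on
    rule: str      # which rule or model produced it
    score: float   # the internal score that drove the outcome
    outcome: str   # what the system actually did

def decide_credit_limit(income: float, debt: float) -> DecisionRecord:
    """Hypothetical decision function with a fully logged rationale."""
    score = income / (debt + 1.0)
    outcome = "approve" if score > 2.0 else "refer_to_human"
    return DecisionRecord(
        timestamp=time.time(),
        inputs={"income": income, "debt": debt},
        rule="income_to_debt_ratio_v1",
        score=score,
        outcome=outcome,
    )

record = decide_credit_limit(income=50_000, debt=30_000)
print(json.dumps(asdict(record), indent=2))  # full audit trail
```

A record like this is also what makes the second half of Colombo’s prescription possible: control structures can only correct deviations they can see.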
While in “2001: A Space Odyssey” the Singularity is inevitable and almost mystical (aided by superior beings), Colombo believes this revolution should not become an automatic leap, “but the result of strategic decisions in innovation, regulation and ethics.”
In other words, the Singularity can expand human capabilities, but only if it is properly governed.
“The problem is not that artificial intelligence acts against us, but that it complies with its instructions perfectly but inadequately. Today we face the risk of systems that, if not designed with values and transparency, can optimize the wrong metrics and destroy value, instead of creating it.”