What are hallucinations produced by Artificial Intelligence? They are inexplicable errors in the answers this new technology offers.

By Gisela Colombo.
Concern over the failures that artificial intelligence produces is growing fast, and these errors are becoming one of the main evils that specialists must battle.
Although we come closer to solving them every day, as more and more cases are documented, the solution may not be easy or immediate.
This is not just a matter of innocuous reasoning errors or small slips of artificial logic. AI hallucinations can have serious effects on users. Failing an exam would be the least of the troubles. Suppose a bridge over the River Thames depends on an intelligent system's ability to detect various signals and decide when to open so that a ship can pass along that stretch of the river. Suppose also that the ship is waiting for the bridge to open and the AI directing it has misread the data. What would happen?
Seen superficially, an error in the execution of an AI would be a minor contingency, the kind of obstacle that cannot undermine the relevance of a technology like this. However, AI is not merely an instrument for streamlining processes or facilitating tasks. In many cases it touches on far more fundamental matters.
An AI model can predict events that will never happen and thereby drag many people into completely wrong decisions, a damage impossible to quantify. A model used to detect fraud can flag a legitimate transaction as fraudulent. The effects? Incalculable. The recent use of AI to diagnose diseases could, with a single mistake, lead to irreversible losses.
The question, then, is what goes wrong in these technological responses. For the moment, it remains a mystery, as mysterious as the workings of human intelligence itself.
Some mysteries of the human psyche likewise elude any single cause. Among the phenomena of experience that science has not managed to explain definitively, such as “déjà vu”, there is another that has generated a great deal of popular interest: what colloquial language calls the “Mandela effect”, a falsehood that takes hold on a large scale, in an inexplicable way.
The phenomenon takes its name from one of its most emblematic cases worldwide: Nelson Mandela's supposed death in prison. The event never happened, yet a very large number of people say they are convinced that this was indeed how the activist's life ended. Why such errors become true for so many is the genuine enigma that both attracts and disconcerts.
But the “Mandela effect” is not the only one of its kind. Similar phenomena arise even more strongly in the field of the arts. Do we forget that the image of Hamlet, the protagonist of the play of the same name by William Shakespeare, holding Yorick's skull while delivering that exemplary soliloquy, “To be or not to be, that is the question…”, never happened? Yet the symbolic synthesis that culture performs spontaneously joins the speech on the meaninglessness of life to a scene whose central object carries a similar metaphorical charge. Yorick was a kind of jester to King Hamlet, the protagonist's father, whose remains the prince finds in the graveyard. The skeleton triggers in him a reflection on the expiry of life, beginning with the figure of the clown; the reference to the comedian never disappears. But in the unconscious, the two scenes merge. Perhaps a publicist, a theater producer, a designer, or an artist in charge of the play's posters first proposed the image. But if so many people conceived it the same way, it is because the creator of the fusion reached deep and touched something perceived as true.
In “Casablanca”, oral tradition has enshrined a line that is never spoken in the film: “Play it again, Sam”. Perhaps the same phenomenon is at work in the remark held to be Sherlock Holmes's most emblematic, “Elementary, my dear Watson”, never uttered in the original texts. The same happens with “Ladran, Sancho, señal de que cabalgamos” (“They bark, Sancho, a sign that we are riding”), put into the mouth of a Don Quixote who became independent of Cervantes, who never wrote that phrase. The saying seems to come from Goethe: “in search of fortune and pleasures / yet always behind us they bark”; or from a proverb of Turkish origin that Rubén Darío would later adapt. But from Don Quixote himself, not a trace.
If erroneous beliefs that were never substantiated have proved universally contagious, as in the “Mandela effect”, how can we prevent the same from happening with our conceptions of the world to come?
Experts on the subject of AI have spoken out on the matter.
Julián Colombo, CEO and founder of a leading software company for banks and insurance companies, has highlighted “the need to implement human reviews for critical data”. That would mean that alongside the AI that raises the bridge over the Thames there will be human eyes verifying that the data match reality and that the technology's decisions respect the laws of traditional physics, for example, “that matter is impenetrable”.
This measure may be the best insurance against negligence. The firm he leads is in fact deploying two AI assistants, named Alfred, alluding to Batman's butler, and PEP, in honor of the soccer coach to whom many attribute Lionel Messi's excellence. Both intelligences are meant to assist what will be the “AI Commander”, an irrevocably human position.
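To make the idea concrete, here is a minimal sketch in Python of what such a human review gate might look like. It assumes a hypothetical controller that proposes actions together with a confidence score; all names, thresholds, and the confirmation rule below are illustrative assumptions, not a description of the company's actual system.

# Illustrative sketch of a human-in-the-loop review gate. All names and
# values are hypothetical; they do not come from any real product.

from dataclasses import dataclass

@dataclass
class Proposal:
    action: str        # e.g. "open_bridge"
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0
    evidence: dict     # the sensor readings behind the decision

REVIEW_THRESHOLD = 0.95  # arbitrary: below this, a human must confirm

def execute(proposal, human_approves):
    """Act only if the model is confident and, when it is not,
    a human reviewer has confirmed the underlying data."""
    if proposal.confidence < REVIEW_THRESHOLD:
        # Escalate to the human "AI Commander" instead of acting alone.
        if not human_approves(proposal):
            return False
    print(f"Executing: {proposal.action}")
    return True

# Usage: the bridge opens only when the sensors and a person agree.
proposal = Proposal("open_bridge", confidence=0.90,
                    evidence={"ship_detected": True, "span_clear": True})
execute(proposal, human_approves=lambda p: all(p.evidence.values()))

The point is not the code itself but the shape of the safeguard: the machine may propose, but on critical data a person decides.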
Colombo even raises the relevance of sensitive responses, born more of intuition and emotion than of logic. Far from complicating things, this can give the human subject “increased accuracy, especially in areas such as risk and compliance.”
The results seem encouraging. Hallucination rates, which according to Vectara range from 3% to 27%, are significantly reduced with the recommended measures.
Does the hallucination factor disappear entirely? It does not seem credible. Perhaps because those who conceived AI still believe that Mandela died in prison.
Artificial intelligence works thanks to data collected at scale from human operations. It is the sum of everything already experienced, though categorized according to what is most common and most convenient. If so, it should surprise none of us that AI hallucinates as well.
