To improve AI we must help it with its hallucinations

The financial sector is wondering how to solve one of the most common problems with the generative algorithms that assist its users day in and day out.

By Pablo Velan, CTO and partner of N5.

Generative artificial intelligence models make an error that can go completely unnoticed by the average person, but that experts have a name for: when the tool offers erroneous, inaccurate or even completely invented answers, we say it is hallucinating.

The root of this malaise lies in the fact that these algorithms are trained on large (very large) volumes of information, from which they learn patterns and associations that may be insufficient, outdated or inconsistent; and since they have no real “comprehension”, they do not actually know what they are saying.

When faced with ambiguous questions or insufficient context, the AI falls back on interpolation and extrapolation to produce an answer, which leads to generalizations or incorrect assumptions. This may not matter to those who use these tools as mere entertainment, but it can cause major problems for companies, where precise decisions are crucial.

These hallucinations can manifest themselves in the form of text, images or videos generated by AI.

To cite a concrete example (one we use when we discuss the topic with colleagues), let’s imagine we want to find out which of our bank’s branches is nearest to us. We contact the institution’s AI, we ask, and the answer that comes back is something like: “Pablo, the bank branches closest to you are X, H and Z”, but instead of listing our bank’s branches, it lists the competition’s. This scene, which may sound like fantasy, is actually real: an analysis by the startup Vectara reports that chatbot hallucination rates range from about 3% to 27%.

Surely at this point you are wondering whether there is a “cure” or treatment to prevent this ailment that affects artificial intelligence, and the emphatic answer is yes, there is. One of them (and perhaps the most important) consists of disconnecting the models from the open Internet and integrating them with closed information sources, feeding them only with data from the bank or fintech in question.
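A minimal sketch of what this grounding looks like in practice is shown below. The function and variable names (search_branch_index, answer_from_closed_source, bank_documents) are hypothetical, not part of any real product; the point is simply that answers are built only from the institution’s own, vetted documents, and the system refuses rather than improvising when nothing matches.

```python
# Sketch of retrieval-grounded answering over a closed, in-house data source.
# All names here are illustrative assumptions, not an actual bank API.

from dataclasses import dataclass


@dataclass
class Document:
    title: str
    text: str


def search_branch_index(query: str, documents: list[Document], top_k: int = 3) -> list[Document]:
    """Naive keyword retrieval over the bank's own documents (the closed source)."""
    terms = set(query.lower().split())
    scored = [(sum(term in doc.text.lower() for term in terms), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]


def answer_from_closed_source(query: str, documents: list[Document]) -> str:
    """Answer only from retrieved in-house context; refuse if nothing matches."""
    context = search_branch_index(query, documents)
    if not context:
        # Refusing is preferable to letting the model improvise (hallucinate).
        return "I don't have that information in the bank's records."
    # In a real system the retrieved context would be handed to the LLM as the
    # only allowed evidence; here we simply return the best match.
    return context[0].text


bank_documents = [
    Document("Branches", "Our branches closest to the city centre are Main St. and River Ave."),
]
print(answer_from_closed_source("Which branch is closest to me?", bank_documents))
```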

Is a large-scale product possible without hallucinations?

Generative AI has become part of the daily lives of people and companies, so much so that in the fintech world it was valued at 1.12 billion dollars in 2023; moreover, its rapid growth suggests it will reach 4.37 billion dollars by 2027, according to a report by Market.us. This is why the financial sector is asking itself how it can keep AI hallucinations out of its products.

The team I lead likes to speak from experience, and that is what we did when we committed to creating the first generation of AIs built exclusively for the financial sector. That venture pushed us to find a way to avoid hallucinations in the vast majority of cases.

OpenAI, the organization behind GPT, has been working on methods to prevent hallucinations in its models. This includes human evaluation to weed out erroneous or fabricated data and the spread of false news.

This is how we iterated several times until we arrived at a distributed model in which multiple AIs work together. We rely on a feedback process that validates the input, processing and output of each user query, so that our tool responds with precise, proprietary information, completely isolated from third-party information, with a very high success rate.
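The sketch below illustrates the general shape of such an input/processing/output feedback loop; it is an assumption-laden simplification, not N5’s actual implementation. The validator rules, the retrieve callable and the answer_model callable are hypothetical placeholders standing in for the multiple cooperating AIs.

```python
# Sketch of a query pipeline where input, processing and output are each checked
# before the answer reaches the user. Names and rules are illustrative only.

from typing import Callable

ALLOWED_TOPICS = ("branch", "account", "card", "loan")


def validate_input(query: str) -> bool:
    """Input check: only accept queries the closed knowledge base can cover."""
    return any(topic in query.lower() for topic in ALLOWED_TOPICS)


def validate_output(answer: str, evidence: str) -> bool:
    """Output check: reject answers not supported by the retrieved evidence."""
    return bool(answer) and answer.lower() in evidence.lower()


def answer_query(query: str,
                 retrieve: Callable[[str], str],
                 answer_model: Callable[[str, str], str]) -> str:
    if not validate_input(query):
        return "I can only help with questions about this bank's products and services."
    evidence = retrieve(query)                 # processing: closed-source retrieval
    answer = answer_model(query, evidence)     # processing: draft the answer
    if not validate_output(answer, evidence):  # output: second check before replying
        return "I couldn't verify that answer against the bank's own data."
    return answer


# Toy stand-ins for the retrieval step and the generative model.
evidence_store = {"branch": "The nearest branch is Main St."}
retrieve = lambda q: next((v for k, v in evidence_store.items() if k in q.lower()), "")
answer_model = lambda q, e: e  # a real model would generate text grounded in e

print(answer_query("Which branch is nearest?", retrieve, answer_model))
```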

AI presents itself as a solution that brings a paradigm shift to our lives, but like all technology it has its limitations and must continue to evolve to overcome its barriers and imperfections. Fortunately, our technology experts are attentive and ready to respond to any obstacle that appears along the way.
