{"id":13022,"date":"2024-11-12T14:50:50","date_gmt":"2024-11-12T17:50:50","guid":{"rendered":"https:\/\/blog.n5now.com\/peligros-de-la-ia-como-minimizar-los-riesgos-de-alucinaciones-y-desinformacion-en-la-industria-financiera\/"},"modified":"2024-11-12T15:02:35","modified_gmt":"2024-11-12T18:02:35","slug":"peligros-de-la-ia-como-minimizar-los-riesgos-de-alucinaciones-y-desinformacion-en-la-industria-financiera","status":"publish","type":"post","link":"https:\/\/blog.n5now.com\/en\/peligros-de-la-ia-como-minimizar-los-riesgos-de-alucinaciones-y-desinformacion-en-la-industria-financiera\/","title":{"rendered":"Dangers of AI: How to minimize the risks of &#8216;hallucinations&#8217; and false information in the financial sector"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Misleading data poses a threat in sensitive areas such as investment management or credit granting<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"892\" height=\"617\" src=\"https:\/\/785-cms-cdn.azureedge.net\/n5cmsblob\/2024\/11\/image.png\" alt=\"\" class=\"wp-image-13006\" srcset=\"https:\/\/blog.n5now.com\/wp-content\/uploads\/2024\/11\/image.png 892w, https:\/\/blog.n5now.com\/wp-content\/uploads\/2024\/11\/image-300x208.png 300w, https:\/\/blog.n5now.com\/wp-content\/uploads\/2024\/11\/image-768x531.png 768w, https:\/\/blog.n5now.com\/wp-content\/uploads\/2024\/11\/image-607x420.png 607w, https:\/\/blog.n5now.com\/wp-content\/uploads\/2024\/11\/image-150x104.png 150w, https:\/\/blog.n5now.com\/wp-content\/uploads\/2024\/11\/image-218x150.png 218w, https:\/\/blog.n5now.com\/wp-content\/uploads\/2024\/11\/image-696x481.png 696w, https:\/\/blog.n5now.com\/wp-content\/uploads\/2024\/11\/image-100x70.png 100w\" sizes=\"(max-width: 892px) 100vw, 892px\" \/><\/figure>\n\n\n\n<p>What would happen if an artificial intelligence (AI) system designed to predict the future of the stock market and trained in a period of economic stability 
faced an imminent crisis? Not trained to recognize such signals, it might interpret a small increase in trading volume as a sign of continued growth and wrongly predict that stock prices will rise, with serious consequences for the market.</p>

<p>Likewise, if an AI tool that analyzes financial market sentiment from news and social media posts is trained inadequately, it may misinterpret expressions or context, producing analysis that does not reflect the market’s true opinion and leading to investment decisions based on distorted information.</p>

<p>These examples show that as AI becomes more prevalent in the financial sector, it not only opens avenues for innovation and automation but also brings challenges, such as so-called AI “hallucinations”: situations in which AI models generate and disseminate false or misleading information.</p>

<p>In the fintech world, AI is here to stay: the market was valued at $1.12 billion in 2023, and its rapid growth suggests it will reach $4.37 billion by 2027, according to estimates from Market.us. However, according to analysis by the startup Vectara, the “hallucination” rate of chatbots ranges from 3% to 27%, a problem for the financial sector, where accurate decisions are crucial.</p>

<p>Julián Colombo, CEO and founder of N5, says that AI can present hallucinations, meaning errors or incorrect interpretations of data, which “leads to erroneous conclusions.” Julio Blanco, co-founder and CBO of Zentricx, clarifies that, in essence, “the result is an invention of the model and is not supported by real information.”</p>

<p>Large language models (LLMs), explains Weslley Rosalem, senior AI leader at Red Hat, work from conditional probabilities learned from training data.</p>
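<p>The idea of conditional probabilities can be illustrated with a toy sketch. The table of probabilities below is entirely invented for illustration; a real LLM learns billions of such relationships. The point is that the model samples whatever is statistically likely in its training data, with no check against factual reality.</p>

```python
import random

# Toy illustration (not a real LLM): a language model stores conditional
# probabilities P(next_token | context) and samples the next token from them.
# These probabilities are invented for this sketch.
NEXT_TOKEN_PROBS = {
    ("stock", "will"): {"rise": 0.6, "fall": 0.3, "stagnate": 0.1},
    ("revenue", "grew"): {"12%": 0.5, "strongly": 0.3, "in": 0.2},
}

def sample_next(context, rng=random.Random(0)):
    """Pick the next token according to the learned distribution."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# After ("stock", "will") the model is most likely to emit "rise" simply
# because that sequence was frequent in training -- regardless of whether
# prices will actually rise. That gap between statistical plausibility and
# fact is what a hallucination is.
print(sample_next(("stock", "will")))
```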
<p>“They generate the next word or token based on the probability distributions of these sequences. Hallucinations occur when the model produces results that are statistically plausible but do not correspond to factual reality. These models capture statistical relationships, but they have no true understanding of the content,” he explains.</p>

<h2 class="wp-block-heading">In the spotlight</h2>

<p>In the financial sector, these hallucinations can appear in several areas, such as credit analysis, where a model may assign a customer “a risk profile that does not reflect their true financial situation, potentially resulting in inappropriate credit-granting decisions,” says Colombo.</p>

<p>In customer service, Blanco adds, institutions may replace “frequently asked questions” pages with search engines for customer queries, and those engines may make incorrect recommendations about services or their costs. “They may even be so delusional that they do not resolve customers’ doubts at all.” Models can also hallucinate when generating financial reports, if they perform complex calculations to estimate or predict trends: “More than predicting, they would be guessing at a future without any real basis,” he points out.</p>

<p>In automated financial advice, hallucinations can lead to recommending inappropriate investment strategies based on faulty data or algorithms. They can also cause problems in fraud detection and risk management. “Hallucinations can lead to false positives or false negatives, compromising the effectiveness of identifying fraudulent activity or assessing risk,” says Rosalem.</p>

<p>By their nature, these hallucinations can cause the industry significant financial losses, reputational damage for institutions and customer dissatisfaction.</p>
<p>“In addition, decisions based on flawed analyses can increase the risk of fraud or regulatory non-compliance, exposing companies to regulatory sanctions. It is crucial to implement validation and monitoring measures to ensure that AI systems operate accurately and transparently, thus minimizing the associated risks,” emphasizes Colombo.</p>

<p>Hallucinations can likewise lead to inefficient decision-making. “Hallucinations can compromise the quality of strategic decisions, affecting the institution’s competitiveness in the market,” adds the Red Hat executive.</p>

<h2 class="wp-block-heading">Minimize the risk</h2>

<p>For Zentricx, the main way to minimize hallucinations is to ensure that the information used is reliable. “If the model is given false information, it learns to repeat the same falsehoods. We always recommend a data consulting project before developing a complex AI model.”</p>

<p>Blanco highlights data quality as a “central” point for reducing hallucinations: “It is necessary to ensure that AI models are trained with diverse, balanced and well-structured data.”</p>
<p>He also recommends performing stress tests on the AI model.</p>

<p>Rosalem suggests that strategies such as RAG (Retrieval-Augmented Generation) or RIG (Retrieval-Interleaved Generation) minimize the effects of hallucinations in LLMs, since they “combine language models with information retrieval systems.” The LLM is fed specific information retrieved from relevant databases or documents, which allows it to generate more accurate and up-to-date answers and reduces exclusive reliance on training data, which may be outdated or incomplete.</p>

<p>Open-source methods and tools such as TrustyAI and guardrails can be used alongside large language models to mitigate hallucinations and improve reliability.</p>

<p>TrustyAI is a suite of tools aimed at improving the explainability and trustworthiness of AI models, providing capabilities to interpret model decisions, identify biases and monitor performance. “By applying TrustyAI to LLMs, it is possible to better understand how the model generates responses and to identify possible hallucinations or misinformation,” says Rosalem.</p>

<p>Guardrails, in turn, are mechanisms that impose restrictions or checks on the outputs of AI models. They can be implemented to ensure that responses stay within a defined scope, follow specific policies or are factually correct.</p>

<p>Colombo adds that implementing human review “for critical data and sensitive responses can increase accuracy, especially in areas such as risk and compliance.”</p>

<p>At N5, the team developed the Fin Sky solution, which combines two approaches. First, they adopted a distributed model that uses multiple AIs working together. Second, they implemented a feedback process that continuously validates the input, processing and output of each user query.</p>
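<p>The retrieval-augmentation flow described by Rosalem can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the document snippets are invented, and naive word-overlap ranking stands in for a real embedding-based vector search.</p>

```python
# Minimal RAG sketch: retrieve relevant passages, then ground the model's
# prompt in them so answers come from supplied facts, not memorized guesses.
KNOWLEDGE_BASE = [
    "The premium account has a monthly fee of $12.",
    "Wire transfers are processed within one business day.",
]

def retrieve(query, documents, top_k=1):
    """Rank documents by naive word overlap with the query (a stand-in
    for a real vector-similarity search)."""
    query_words = set(query.lower().split())
    def overlap(doc):
        return len(query_words & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:top_k]

def build_prompt(query, documents):
    """Instruct the model to answer ONLY from the retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using only the context below. If the answer is not in the "
        f"context, say you do not know.\n\nContext:\n{context}\n\n"
        f"Question: {query}"
    )

prompt = build_prompt("What is the monthly fee of the premium account?", KNOWLEDGE_BASE)
```

<p>The key design choice is the instruction to admit ignorance when the context lacks the answer, which reduces the model's incentive to invent one.</p>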
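<p>A guardrail of the kind described above could take the form of a post-generation check. The sketch below is hypothetical (it is not the API of any specific guardrail product): it verifies that every dollar figure a model's answer cites actually appears in the source documents, and flags unbacked figures for human review, in line with Colombo's point about human oversight.</p>

```python
import re

# Hypothetical guardrail sketch: before a model's answer reaches a customer,
# check that every dollar figure it cites appears in the source documents;
# anything unbacked is escalated to a human reviewer.
MONEY = re.compile(r"\$\d[\d,]*(?:\.\d+)?")

def unbacked_figures(answer, sources):
    """Return dollar amounts cited in `answer` that no source document
    contains; an empty list means the guardrail check passes."""
    cited = set(MONEY.findall(answer))
    backed = {m for doc in sources for m in MONEY.findall(doc)}
    return sorted(cited - backed)

sources = ["The premium account has a monthly fee of $12."]
# "$12" is backed by the sources; "$15" would be flagged for review.
print(unbacked_figures("The monthly fee is $15.", sources))
```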
<p>“This allowed us to reduce the hallucination rate to 0.3%, compared with the 3% to 27% observed in chatbots, according to data from the startup Vectara,” explains Colombo. He concludes by noting that their AIs are trained exclusively on the institution’s own data, avoiding queries against random information on the internet, which further increases the accuracy of the responses. “This combination of methods ensures that the solutions are reliable and secure, considering a sector where information plays a critical role.”</p>
ddable":true,"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/media\/13011"}],"wp:attachment":[{"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/media?parent=13022"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/categories?post=13022"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/tags?post=13022"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}