{"id":15707,"date":"2025-11-27T21:03:17","date_gmt":"2025-11-28T00:03:17","guid":{"rendered":"https:\/\/blog.n5now.com\/xia-la-inteligencia-artificial-explicable-que-revoluciona-la-transparencia-en-decisiones-automatizadas\/"},"modified":"2025-11-27T21:08:46","modified_gmt":"2025-11-28T00:08:46","slug":"xia-la-inteligencia-artificial-explicable-que-revoluciona-la-transparencia-en-decisiones-automatizadas","status":"publish","type":"post","link":"https:\/\/blog.n5now.com\/en\/xia-la-inteligencia-artificial-explicable-que-revoluciona-la-transparencia-en-decisiones-automatizadas\/","title":{"rendered":"XAI: The Explainable Artificial Intelligence Transforming Transparency in Automated Decisions"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Artificial intelligence is like a machine that learns to make decisions by looking at many examples.<\/h2>\n\n\n\n<p>These examples are, much like for a human, experiences\u2014experiences evaluated through statistical criteria. The more often a thought, reaction, behavior, or piece of information appears across cases, the more strongly it will shape the AI\u2019s response patterns.<\/p>\n\n\n\n<p>Let\u2019s suppose the AI is shown thousands of cases of people who applied for a loan.<\/p>\n\n\n\n<p>The AI learns patterns. And then it can decide whether a credit application is \u201capproved\u201d or \u201cnot approved.\u201d<\/p>\n\n\n\n<p>However, in this process of gathering information and reasoning toward a decision, the inner workings are not clear. This is what has been called the \u201cblack box\u201d of AI.<\/p>\n\n\n\n<p>Many modern AIs (especially the powerful ones) operate as black boxes: they provide an answer but do not reveal why or how they reached it.<\/p>\n\n\n\n<p>How would you react if you went to a doctor, received a prescription, and upon asking what it was for, the doctor said: \u201cWell, that doesn\u2019t matter. What matters is that you take it. 
Don\u2019t you trust me?\u201d<\/p>\n\n\n\n<p>AI operates in highly sensitive areas of our reality: banking, healthcare, and justice systems use it constantly. This opacity multiplies risks and could lead to extremely serious problems.<\/p>\n\n\n\n<p>What solutions have been proposed recently to address this potential source of trouble? This is where XAI, or Explainable Artificial Intelligence, comes in.<\/p>\n\n\n\n<p>The goal is to understand how the AI processes and interprets the information it is trained on.<\/p>\n\n\n\n<p>To achieve this, a set of techniques has been developed to make transparent the opaque reasoning that makes up the \u201cblack box.\u201d<\/p>\n\n\n\n<p>In Europe, for example, the EU AI Act already requires transparency and explainability for \u201chigh-risk\u201d systems (such as credit scoring). This makes XAI almost mandatory in many industries.<\/p>\n\n\n\n<p>The purpose is to:<\/p>\n\n\n\n<ul>\n<li>build greater trust in the tool and its outputs;<\/li>\n\n\n\n<li>detect bias;<\/li>\n\n\n\n<li>identify and correct errors when needed.<\/li>\n<\/ul>\n\n\n\n<p>Let\u2019s consider an example. Imagine a credit decision engine whose primary function is to approve or deny loans, recently adopted by a bank that wants to speed up its processes.<\/p>\n\n\n\n<p>The model does not \u201cthink\u201d like a human: it calculates probabilities based on patterns learned from thousands of previous loans.<\/p>\n\n\n\n<p>It must evaluate and decide whether to \u201capprove\u201d or \u201cdeny\u201d the credit request. If it provides only the final result, it behaves like a black box.<\/p>\n\n\n\n<p>Due to regulation and the need for trust, the bank will instead require a clear explanation. That is where Explainable AI (XAI) enters.<\/p>\n\n\n\n<p>Let\u2019s take the case of Ana R., 29 years old, whom the system records with a monthly income of $1,200,000. She has six months of job tenure and a current debt of $800,000. Her payment history shows two late payments in the past year. 
The system also calculates that the potential loan installment would represent 45% of her salary.<\/p>\n\n\n\n<p>The verdict: \u201cNot approved.\u201d<\/p>\n\n\n\n<p>Both the financial institution and the applicant\u2014in this case, Ana R.\u2014will naturally ask: \u201cWhy not?\u201d<\/p>\n\n\n\n<p>And this is where XAI provides transparency, making the model\u2019s reasoning visible. It does so through three layers:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>1) \u201cWhat mattered most\u201d explanation<\/strong><\/h3>\n\n\n\n<p>XAI uses techniques such as SHAP or LIME to indicate which variables pushed the decision toward \u201cnot approved.\u201d<\/p>\n\n\n\n<p>The system might return something like:<\/p>\n\n\n\n<ul>\n<li>Payment history with late payments \u2192 strong negative impact<\/li>\n\n\n\n<li>Installment-to-income ratio 45% \u2192 high risk<\/li>\n\n\n\n<li>Job tenure: 6 months \u2192 low stability<\/li>\n\n\n\n<li>Income $1,200,000 \u2192 neutral \/ slightly negative<\/li>\n\n\n\n<li>Age 29 \u2192 almost no influence<\/li>\n<\/ul>\n\n\n\n<p>This is extremely useful for analysts because it turns the \u201cno\u201d into something understandable. 
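<\/p>\n\n\n\n<p>As a minimal sketch of where such attributions come from (assuming a linear scoring model, where a feature\u2019s SHAP value reduces to its weight times its deviation from a baseline; all weights, baseline values, and Ana\u2019s encoded numbers below are invented for illustration, not real model output):<\/p>\n\n\n\n

```python
# Illustrative sketch: for a linear model with independent features,
# the SHAP value of a feature is weight * (feature value - baseline value).
# Weights, baselines, and Ana's encoded values are invented for illustration.

WEIGHTS = {
    "late_payments": -1.5,          # each extra late payment hurts a lot
    "installment_to_income": -8.0,  # a heavy payment burden hurts
    "job_tenure_months": 0.03,      # longer tenure helps slightly
    "income": 0.0000002,            # income has a very small effect
    "age": 0.001,                   # age barely matters
}

BASELINE = {  # a hypothetical "average applicant"
    "late_payments": 0.5,
    "installment_to_income": 0.30,
    "job_tenure_months": 24,
    "income": 1_000_000,
    "age": 35,
}

def attributions(applicant):
    """Per-feature push toward approval (+) or rejection (-)."""
    return {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}

ana = {
    "late_payments": 2,
    "installment_to_income": 0.45,
    "job_tenure_months": 6,
    "income": 1_200_000,
    "age": 29,
}

# Sort from strongest negative to strongest positive contribution.
for feature, contrib in sorted(attributions(ana).items(), key=lambda kv: kv[1]):
    print(f"{feature:>22}: {contrib:+.3f}")
```

\n\n\n\n<p>Run on Ana\u2019s profile, this reproduces the ranking above: late payments weigh most heavily, followed by the payment burden and short tenure, while income and age barely move the decision.<\/p>\n\n\n\n<p>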
SHAP is widely used for exactly this reason in credit scoring.<\/p>\n\n\n\n<p>How should the human agent explain this to the customer?<\/p>\n\n\n\n<p>\u201cWe rejected your application mainly because of recent late payments and because the installment would be too heavy compared to your income.\u201d<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>2) Counterfactual explanation: \u201cWhat would need to change for approval\u201d<\/strong><\/h3>\n\n\n\n<p>This is the clearest and easiest part for a customer to understand.<\/p>\n\n\n\n<p>The system might answer:<\/p>\n\n\n\n<p>\u201cWith your current situation, the loan is not approved.<br>If your installment-to-income ratio dropped to 30%,<br>or if you had 0 late payments within 12 months, the decision would change to \u2018approved.\u2019\u201d<\/p>\n\n\n\n<p>This explanation gives hope that improving these conditions may lead to approval in the future. It may also suggest that applying for a smaller amount could result in immediate approval.<\/p>\n\n\n\n<p>In other words:<br>\u201cAna, if you request a slightly lower amount (or more installments) so that the payment burden decreases, we would approve it.\u201d<br>Or: \u201cIf you go a full year without late payments, we would approve it.\u201d<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>3) Audit-ready explanation: \u201csimple rules\u201d<\/strong><\/h3>\n\n\n\n<p>Sometimes the bank needs rules it can show to regulators or internal audit teams.<\/p>\n\n\n\n<p>XAI might summarize it like this:<\/p>\n\n\n\n<p>\u201cRejected due to: recent late payments &gt; 1; and installment\/income ratio &gt; 40%.\u201d<\/p>\n\n\n\n<p>This supports model risk management, a practice that banking regulators have increasingly reinforced.<\/p>\n\n\n\n<p>XAI itself has evolved from an innovation into an obligation in a very short time. 
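<\/p>\n\n\n\n<p>Putting layers 2 and 3 together, auditable rules of this kind can be sketched as follows (the thresholds come from the example above; the helper name and the requirement that both rules fire before rejection are our assumptions):<\/p>\n\n\n\n

```python
# Hypothetical audit-ready decision rules mirroring the example:
# reject only when BOTH risk rules fire (recent late payments > 1 AND
# installment/income ratio > 40%), so fixing either condition flips
# the outcome; that is exactly the counterfactual explanation.

def credit_decision(late_payments, installment, income):
    """Return ('approved' | 'not approved', list of rules that fired)."""
    ratio = installment / income
    fired = []
    if late_payments > 1:
        fired.append(f"recent late payments > 1 (found {late_payments})")
    if ratio > 0.40:
        fired.append(f"installment/income ratio > 40% ({ratio:.0%})")
    decision = "not approved" if len(fired) == 2 else "approved"
    return decision, fired

# Ana's current situation: 2 late payments, 45% payment burden.
print(credit_decision(2, 540_000, 1_200_000)[0])   # not approved

# Counterfactual 1: a clean 12-month payment history.
print(credit_decision(0, 540_000, 1_200_000)[0])   # approved

# Counterfactual 2: a smaller loan brings the ratio down to 30%.
print(credit_decision(2, 360_000, 1_200_000)[0])   # approved
```

\n\n\n\n<p>Because rejection in this sketch requires both rules to fire, fixing either one (a clean payment history, or a smaller loan that lowers the ratio) flips the decision to \u201capproved,\u201d exactly as the counterfactual explanation promises.<\/p>\n\n\n\n<p>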
The reason is that credit systems are classified as high-risk AI in Europe, and the law requires transparency, clear and understandable explanations, traceability, and continuous monitoring.<\/p>\n\n\n\n<p>This means that it isn\u2019t enough for AI to be accurate. It must justify its decision transparently.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>In summary<\/strong><\/h2>\n\n\n\n<p>Without XAI, the conversation would be:<\/p>\n\n\n\n<p>Customer: \u201cWhy was I rejected?\u201d<br>Bank: \u201cWe don\u2019t know, the model said so.\u201d<\/p>\n\n\n\n<p>With XAI, the bank instead says:<\/p>\n\n\n\n<ul>\n<li>\u201cWe rejected your application for three main reasons.\u201d<\/li>\n\n\n\n<li>\u201cIf you change X or Y, we will approve it.\u201d<\/li>\n\n\n\n<li>\u201cOur decision complies with auditable rules.\u201d<\/li>\n<\/ul>\n\n\n\n<p><strong>And the results for the financial institution are:<\/strong><\/p>\n\n\n\n<ul>\n<li>Greater trust<\/li>\n\n\n\n<li>Lower legal risk<\/li>\n\n\n\n<li>Better customer experience<\/li>\n\n\n\n<li>More opportunities to improve each process<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Learn what Explainable AI (XAI) is, how it works, and why it&#8217;s essential for transparency in credit scoring, healthcare, banking, and justice 
systems.<\/p>\n","protected":false},"author":36,"featured_media":15704,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_seopress_robots_primary_cat":"none","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":"","footnotes":""},"categories":[217],"tags":[],"_links":{"self":[{"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/posts\/15707"}],"collection":[{"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/users\/36"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/comments?post=15707"}],"version-history":[{"count":2,"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/posts\/15707\/revisions"}],"predecessor-version":[{"id":15713,"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/posts\/15707\/revisions\/15713"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/media\/15704"}],"wp:attachment":[{"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/media?parent=15707"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/categories?post=15707"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.n5now.com\/en\/wp-json\/wp\/v2\/tags?post=15707"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}