Banks and Credit Unions: Understanding the Importance of Explainable AI 

While AI was once the sandbox for data science gurus and mathematicians with PhDs, new tools and technologies are bringing Enterprise AI closer to business users. At the same time, financial services regulatory bodies are becoming keenly aware of the risks posed by unchecked AI.

Most consumer fairness laws, like the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA), do not explicitly address AI, but they are increasingly being interpreted to apply to AI-driven decision-making processes. In 2023, the Federal Reserve emphasized that employing AI technologies does not exempt banks and credit unions from complying with existing laws and regulations.

Moving forward, the onus is on banks and credit unions to ensure AI systems are auditable to avoid legal violations involving discrimination or unfair lending practices. Failure to do so could result in regulatory penalties, reputational damage, and loss of consumer trust. While the stakes are high, the opportunities are equally compelling. To balance AI’s risks and rewards, proactive banks and credit unions will develop their AI strategies with an eye toward Explainable AI (XAI).

Timeline of the Journey to Explainable AI

AI’s evolution began with expert systems in the 1980s, which automated decision-making using opaque, rule-based models. Milestones like IBM’s Deep Blue in 1997 and Watson in 2011 showcased AI’s growing complexity and capabilities while highlighting the need for transparency and interpretability, setting the stage for modern Explainable AI (XAI) efforts aimed at making complex AI models understandable and accountable.

  • 2016: GDPR and the “Right to Explanation.” The EU’s GDPR introduces the “right to explanation,” signaling that AI-driven decisions impacting individuals need transparency, although black-box models remain widespread in business.
  • 2018: Introduction of SHAP and LIME. Tools emerge to improve transparency in AI decision-making, but many businesses continue deploying opaque models due to their predictive power, especially in finance and healthcare.
  • 2020: Generative Models. OpenAI’s GPT-3 exemplifies the power of black-box AI in generating realistic content. Businesses increasingly adopt generative models for applications like customer support, sparking concerns over lack of transparency.
  • 2021: EU AI Act Proposal. The EU proposes the AI Act, pushing for transparency and interpretability in AI applications, particularly in high-risk business areas, increasing pressure to adopt XAI tools.
  • 2023: Federal Reserve’s Statement on AI Accountability. The Federal Reserve clarifies that AI in financial services must comply with existing laws, highlighting the need for transparent, accountable AI as regulatory demands evolve.

What is Explainable AI?

Explainable AI refers to data management practices that make AI decision-making processes understandable to humans, with a focus on transparency, interpretability, and explainability.

Traditionally, AI systems have operated like black boxes, offering little insight into how they arrive at specific decisions or predictions. In a black-box model, it is difficult to interpret how specific inputs lead to outputs: intricate computations create decision-making processes that are nearly impossible to decipher, and when models are optimized for accuracy over explainability, they prioritize performance at the expense of transparency.

XAI relies on data management practices and tooling to produce methods and models that prioritize interpretability. For banks and credit unions, adopting XAI is becoming essential with AI integrations into operations like loan approvals, credit scoring, fraud detection, and risk assessment. By providing clear insights into how AI models arrive at specific decisions, XAI helps financial institutions ensure algorithms operate fairly, ethically, and in compliance with regulatory standards.

Implementing explainable AI also enhances operational efficiency and risk management. By understanding the inner workings of AI models, banks and credit unions can optimize processes for better strategic decisions.

As AI becomes more accessible and integral to financial services, embracing XAI ensures that banks and credit unions can leverage the benefits of advanced analytics while upholding the trust that is fundamental to their institutions.

Promoting XAI in Customer Experience Applications

For most banks and credit unions, the first place to look for XAI will be where it might already be implemented. AI-powered chatbots, for example, have gained popularity for their ability to provide 24/7 customer support for a fraction of the price of live customer service. A 2023 report from the Consumer Financial Protection Bureau (CFPB) estimated that 37% of the U.S. population – or nearly 100 million users – had interacted with a bank’s chatbot in 2022, and projected that number to grow to nearly 111 million users by 2026. The same report questions the efficacy of chatbots for complex problems, noting that they can leave customers frustrated and unable to find accurate answers to their questions.

Chatbots that rely on large language models (LLMs) often struggle to distinguish between factual and incorrect data, which can result in the dissemination of misinformation. This issue is especially concerning when it comes to financial advice, as mistakes could lead to serious consequences for users. The CFPB report showed that LLM-based chatbots, such as ChatGPT, LaMDA, and Bard, often generate incorrect or biased outputs, leading to unreliable customer service experiences. Account holders seeking financial advice may find themselves trapped in “doom loops,” where chatbots provide repetitive, unhelpful responses, without escalation to a human representative. This reliance on automation, while intended to improve efficiency, can leave customers frustrated and without meaningful assistance, particularly when their issues fall outside the chatbot’s capabilities.

Fortunately, advances in tooling have made it possible to introduce XAI into these high-stakes applications.

Open-source tools like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) are powerful methods that help banks and other organizations understand how their AI models make decisions. These tools are designed to break down black-box AI models into understandable insights.

SHAP (SHapley Additive exPlanations)

SHAP explains predictions by calculating the contribution of each feature (such as a customer’s credit score or transaction history) to the AI model’s final decision. It’s based on game theory principles and assigns an “importance” value to each factor involved in the decision-making process. This is particularly useful when banks need to explain why a loan was approved or denied, or how a risk score was calculated.
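Below is a minimal sketch of how SHAP might be applied to a lending model. The feature names, synthetic data, and risk-score target are illustrative assumptions rather than a production setup; any scikit-learn-style model could stand in.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic loan applications with hypothetical features (illustration only).
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "credit_score": rng.integers(500, 850, 1000),
    "income": rng.integers(20_000, 200_000, 1000),
    "debt_to_income": rng.uniform(0.05, 0.6, 1000),
})
# Toy risk score loosely tied to credit score and debt load.
y = 0.6 * (850 - X["credit_score"]) / 350 + 0.4 * X["debt_to_income"]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # explain a single applicant

# Each value is that feature's contribution, above or below the model's
# average prediction, to this applicant's risk score.
print(dict(zip(X.columns, shap_values[0])))
```

Per-row values like these can back individual explanations (for example, why one applicant’s risk score came out high), while aggregate views such as shap.summary_plot show which features dominate across a whole portfolio.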

LIME (Local Interpretable Model-agnostic Explanations)

LIME provides explanations for individual predictions by creating simpler models around the original AI model’s decision for a specific instance (like a customer inquiry). It builds these local models to show how slight changes in data inputs (e.g., income or employment history) could impact the prediction, giving users an idea of which factors are most influential.
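A comparable LIME sketch, reusing the synthetic data and model from the SHAP example above (again, illustrative assumptions rather than a production setup):

```python
from lime.lime_tabular import LimeTabularExplainer

# Build a tabular explainer around the training data so LIME knows realistic
# ranges for perturbing each feature.
explainer = LimeTabularExplainer(
    training_data=X.values,
    feature_names=list(X.columns),
    mode="regression",  # the toy model predicts a numeric risk score
)

# LIME perturbs this applicant's row, scores the perturbed samples with the
# model, and fits a simple local model to show which features mattered most.
explanation = explainer.explain_instance(
    X.iloc[0].values, model.predict, num_features=3
)
print(explanation.as_list())  # ranked (feature condition, weight) pairs
```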

SHAP and LIME are applied after the fact, using a model’s existing inputs and outputs to analyze and interpret the influence of individual features on predictions. For that reason, the tooling can be used with almost any existing model and is a great place to start building XAI into your current AI strategy.

XAI Implementation Flowchart

  1) Data Quality Foundations: ensure data integrity, implement data cleaning and validation, and monitor data consistency.
  2) Transparency Measures: set clear transparency rules, create data lineage records, and educate stakeholders.
  3) Implement Explainable Tools: integrate SHAP and LIME, conduct regular model audits, and test on specific cases.
  4) Continuous Improvement: monitor for model drift, refine XAI strategies over time, and engage in ongoing training.

Building an Architecture that Supports Explainability

While tools like SHAP and LIME can help optimize and fine-tune AI applications, organizations can proactively set the stage for XAI with a data architecture that emphasizes foundational data quality and auditability while establishing guardrails for AI applications. This isn’t a big shift from data governance best practices that mandate data stewardship throughout the data lifecycle. Enterprise AI only raises the stakes: as data stores continue to grow, AI can leverage, augment, and distort data at an exponential rate.

Data Quality: Create Foundational Data Trust

AI models will only be as good as the data they consume, and incomplete, inaccurate, or inconsistent data can lead to incorrect decisions. At best, you will never see the full value of AI initiatives that are not informed by well-governed data; at worst, you can run afoul of consumer fairness regulations.

To support XAI, a data quality program is not just about avoiding errors; it’s about making sure every decision made by AI is based on reliable, traceable information that can be clearly explained. Ensuring data quality requires the ability to clean and validate data, along with the ability to monitor it over time across dimensions such as integrity, consistency, completeness, timeliness, and popularity. Automated tools like Data Quality Watch can simplify this process, and once data quality metrics are established, organizations can move toward improving data maturity.
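As a rough illustration of what automated checks can look like, the sketch below validates a hypothetical transactions table with pandas. The column names, rules, and thresholds are assumptions for illustration, not the behavior of any particular tool.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Simple completeness, validity, consistency, and timeliness checks."""
    return {
        # Completeness: share of non-null values per column.
        "completeness": (1 - df.isna().mean()).round(3).to_dict(),
        # Validity: transaction amounts should never be negative.
        "amounts_non_negative": bool((df["amount"].dropna() >= 0).all()),
        # Consistency: account IDs follow the expected pattern (hypothetical format).
        "account_ids_valid": bool(
            df["account_id"].str.match(r"^ACC\d{8}$").fillna(False).all()
        ),
        # Timeliness: the newest record is less than a day old
        # (assumes posted_at is a timezone-aware UTC timestamp).
        "data_fresh": (pd.Timestamp.now(tz="UTC") - df["posted_at"].max())
        < pd.Timedelta("1D"),
    }
```

Checks like these can run on a schedule ahead of model scoring so that failures flag or block downstream AI decisions rather than silently feeding them.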

Data Observability: Measure Data Quality at Scale

Data observability refers to the ability to monitor, understand, and measure the health of data across the entire pipeline, from data ingestion to AI model deployment. Observability tools provide real-time insights into data quality, helping banks ensure that data used by AI models is complete, accurate, and up to date.

Banks and credit unions using AI to make decisions must be able to measure data quality at scale. Without observability, data quality issues can go undetected, leading to poor model performance, biased predictions, and difficult-to-explain decisions.

  • Real-Time Data Quality Monitoring: Banks need tools that can continuously monitor data as it flows through their systems. For example, if an AI model is analyzing customer transactions to detect fraud, data observability tools can monitor the completeness and accuracy of those transaction records in real-time. If an anomaly is detected, such as missing transaction timestamps, the system can flag these issues before they affect the AI model’s predictions.
  • Data Drift Detection: Over time, the data used to train AI models can change, a phenomenon known as data drift. For example, customer spending behaviors may shift during economic downturns or periods of inflation. Data observability helps detect these changes, ensuring AI models are updated with the most current data and can continue providing accurate, explainable predictions. If a model’s output starts to deviate from expected patterns, observability tools can alert data teams to investigate and adjust the model accordingly (a minimal drift check is sketched after this list).
  • Automated Quality Checks: Observability tools can be configured to automatically check for data consistency, completeness, and accuracy at various stages of the data pipeline. For example, if a customer’s credit score is being calculated from multiple data sources, automated checks can ensure that the information from each source is consistent and up to date before it reaches the AI model.
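To make the drift-detection point above concrete, here is a minimal sketch using the Population Stability Index (PSI), a common way to compare a feature’s current distribution against the distribution the model was trained on. The column name, the alert_data_team helper, and the 0.2 threshold are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def population_stability_index(baseline: pd.Series, current: pd.Series, bins: int = 10) -> float:
    """Measure how far a feature's current distribution has drifted from its baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected = np.histogram(baseline, bins=edges)[0] / len(baseline)
    actual = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid division by zero or log(0) on empty bins.
    expected = np.clip(expected, 1e-6, None)
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Example: compare customer spending at training time vs. the latest scoring batch.
# psi = population_stability_index(train_df["monthly_spend"], recent_df["monthly_spend"])
# if psi > 0.2:  # a common rule-of-thumb threshold for a significant shift
#     alert_data_team(feature="monthly_spend", psi=psi)  # hypothetical helper
```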

Data Transformation and Traceability: Build a Transparent Pipeline

Data rarely exists in a perfect, ready-to-use format. Before it can be fed into AI models, it must undergo data transformation, where it is cleaned, aggregated, and made usable. However, every transformation introduces the risk of altering the data in a way that affects the model’s decisions.

Data lineage traceability – the ability to trace data from its source, through each transformation, to its final use in AI models – is critical to maintaining accountability in the AI decision-making process.
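As a simple illustration of what lineage capture can look like in practice, the sketch below records each transformation step as data moves toward a model’s feature table. The dataset names and metadata fields are assumptions for illustration, not any specific lineage product’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Tracks where a dataset came from and every transformation applied to it."""
    dataset: str
    source: str
    steps: list = field(default_factory=list)

    def log_step(self, name: str, detail: str) -> None:
        # Timestamped entries let auditors replay the path from raw source
        # data to the exact inputs an AI model saw.
        self.steps.append({
            "step": name,
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
        })

# Usage: trace credit features from a core banking extract to the loan model.
lineage = LineageRecord(dataset="credit_features", source="core_banking.accounts")
lineage.log_step("clean", "dropped rows with null account_id")
lineage.log_step("aggregate", "rolled up 90-day transaction totals per customer")
lineage.log_step("deliver", "joined into the loan-approval model's feature table")
```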

Getting Started with Explainable AI

Like any advanced data application, a healthy AI program starts with fundamental data management and data governance. Developing an XAI strategy can go a long way in allaying concerns around regulatory reporting and can mitigate fears of bias or model hallucination. While AI Readiness doesn’t happen overnight, it can happen incrementally. If you’re ready to get started, contact us today.
