While AI was once the sandbox for data science gurus and mathematicians with PhDs, new tools and technologies are bringing Enterprise AI closer to business users. At the same time, financial services regulatory bodies are becoming keenly aware of the risks posed by unchecked AI.
Most consumer fairness laws, like the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA), do not explicitly address AI, but they are increasingly being interpreted to apply to AI-driven decision-making processes. In 2023, the Federal Reserve emphasized that employing AI technologies does not exempt banks and credit unions from complying with existing laws and regulations.
Moving forward, the onus is on banks and credit unions to ensure AI systems are auditable to avoid legal violations involving discrimination or unfair lending practices. Failure to do so could result in regulatory penalties, reputational damage, and loss of consumer trust. While the stakes are high, the opportunities are equally compelling. To balance AI’s risks and rewards, proactive banks and credit unions will develop their AI strategies with an eye to Explainable AI (XAI).
Explainable AI refers to data management practices that make AI decision-making processes understandable to humans, with a focus on transparency, interpretability and explainability.
Traditionally, AI systems have operated like black boxes, offering little insight into how they arrive at specific decisions or predictions. In a black-box model, it is difficult to interpret how specific inputs lead to outputs: intricate computations create decision-making processes that are nearly impossible to decipher, and when models are optimized for accuracy over explainability, they prioritize performance at the expense of transparency.
XAI relies on data management practices and tooling to produce methods and models that prioritize interpretability. For banks and credit unions, adopting XAI is becoming essential as AI is integrated into operations like loan approvals, credit scoring, fraud detection, and risk assessment. By providing clear insights into how AI models arrive at specific decisions, XAI helps financial institutions ensure algorithms operate fairly, ethically, and in compliance with regulatory standards.
Implementing explainable AI also enhances operational efficiency and risk management. By understanding the inner workings of AI models, banks and credit unions can optimize processes for better strategic decisions.
As AI becomes more accessible and integral to financial services, embracing XAI ensures that banks and credit unions can leverage the benefits of advanced analytics while upholding the trust that is fundamental to their institutions.
For most banks and credit unions, the first place to look for XAI will be where AI might already be implemented. AI-powered chatbots, for example, have gained popularity for their ability to provide 24/7 customer support at a fraction of the price of live customer service. A 2023 report from the Consumer Financial Protection Bureau (CFPB) estimated that 37% of the U.S. population – or nearly 100 million users – had interacted with a bank’s chatbot in 2022, and projected that number to grow to nearly 111 million users by 2026. The same report questions the efficacy of chatbots for complex problems, which can leave customers frustrated and unable to find accurate answers.
Chatbots that rely on large language models (LLMs) often struggle to distinguish between factual and incorrect data, which can result in the dissemination of misinformation. This issue is especially concerning when it comes to financial advice, as mistakes could lead to serious consequences for users. The CFPB report showed that LLM-based chatbots, such as ChatGPT, LaMDA, and Bard, often generate incorrect or biased outputs, leading to unreliable customer service experiences. Account holders seeking financial advice may find themselves trapped in “doom loops,” where chatbots provide repetitive, unhelpful responses, without escalation to a human representative. This reliance on automation, while intended to improve efficiency, can leave customers frustrated and without meaningful assistance, particularly when their issues fall outside the chatbot’s capabilities.
Fortunately, technological advances have made it possible to introduce XAI into these high-stakes applications.
Open-source tools like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) are powerful methods that help banks and other organizations understand how their AI models make decisions. These tools are designed to break down black-box AI models into understandable insights.
SHAP explains predictions by calculating the contribution of each feature (such as a customer’s credit score or transaction history) to the AI model’s final decision. It’s based on game theory principles and assigns an “importance” value to each factor involved in the decision-making process. This is particularly useful when banks need to explain why a loan was approved or denied, or how a risk score was calculated.
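To make that concrete, here is a minimal sketch of SHAP applied to a hypothetical credit-decision model. The dataset, feature names, and model below are illustrative stand-ins, not a production setup:

```python
# A minimal sketch of SHAP on a hypothetical credit-decision model.
# The data, feature names, and model below are illustrative only.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical loan-application features and approve/deny labels
X = pd.DataFrame({
    "credit_score":   [620, 710, 580, 745, 690, 605],
    "annual_income":  [48_000, 92_000, 35_000, 110_000, 67_000, 41_000],
    "debt_to_income": [0.42, 0.18, 0.55, 0.12, 0.30, 0.47],
})
y = [0, 1, 0, 1, 1, 0]  # 0 = denied, 1 = approved

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values for tree ensembles; each value
# is one feature's contribution (in log-odds) to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Why was applicant 0 denied? Show each feature's push toward approval or denial.
print(dict(zip(X.columns, shap_values[0].round(3))))
```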
LIME provides explanations for individual predictions by creating simpler models around the original AI model’s decision for a specific instance (like a customer inquiry). It builds these local models to show how slight changes in data inputs (e.g., income or employment history) could impact the prediction, giving users an idea of which factors are most influential.
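A comparable sketch with LIME, again using purely illustrative data and feature names, might look like this:

```python
# A minimal LIME sketch on the same kind of hypothetical credit model;
# data, feature names, and class labels are illustrative only.
import pandas as pd
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

X = pd.DataFrame({
    "credit_score":   [620, 710, 580, 745, 690, 605],
    "annual_income":  [48_000, 92_000, 35_000, 110_000, 67_000, 41_000],
    "debt_to_income": [0.42, 0.18, 0.55, 0.12, 0.30, 0.47],
})
y = [0, 1, 0, 1, 1, 0]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X.to_numpy(),
    feature_names=list(X.columns),
    class_names=["denied", "approved"],
    mode="classification",
)

# Fit a simple local surrogate around one applicant's prediction; LIME
# perturbs the inputs and watches how the model's output shifts.
exp = explainer.explain_instance(X.to_numpy()[0], model.predict_proba, num_features=3)
print(exp.as_list())  # e.g. [("credit_score <= 650.00", -0.21), ...]
```

Because LIME only needs a prediction function, it can wrap virtually any model – that is what makes it model-agnostic.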
SHAP and LIME are applied to the output of a trained model, using the model’s existing data to analyze and interpret the influence of individual features on predictions. For that reason, the tooling can be used with almost any existing model and is a great place to start building XAI into your current AI strategy.
While tools like SHAP and LIME can help optimize and fine-tune AI applications, organizations can proactively set the stage for XAI with a data architecture that emphasizes foundational data quality and auditability while establishing guardrails for AI applications. This isn’t a big shift from data governance best practices that mandate data stewardship throughout the data lifecycle. Enterprise AI only raises the stakes – as data stores continue to grow, AI can leverage, augment – and distort – data at an exponential rate.
AI models will only be as good as the data they consume. Incomplete, inaccurate, or inconsistent data can lead to incorrect decisions: at best, you will never truly see the value of AI initiatives that are not informed by well-governed data; at worst, you can run afoul of consumer fairness regulations.
To support XAI, a data quality program is not just about avoiding errors; it’s about making sure every decision made by the AI is based on reliable, traceable information that can be clearly explained. Ensuring data quality requires the ability to clean and validate data, along with the ability to monitor data over time across dimensions like data integrity, data consistency, data completeness, data timeliness and data popularity. Automated tools like Data Quality Watch can simplify this process, and once data quality metrics are established, organizations can move toward improving data maturity.
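As an illustration (a generic stand-in, not a depiction of the Data Quality Watch product), a basic quality report over a hypothetical applications table might compute metrics like these:

```python
# An illustrative sketch of basic data quality checks (a stand-in, not
# the Data Quality Watch product); columns and metrics are hypothetical.
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Compute simple completeness, consistency, timeliness, and integrity metrics."""
    return {
        # Completeness: share of non-null values per column
        "completeness": df.notna().mean().round(3).to_dict(),
        # Consistency: domain check against the valid credit-score range
        "valid_credit_score": float(df["credit_score"].between(300, 850).mean()),
        # Timeliness: age of the most recent record
        "staleness_days": (pd.Timestamp.now() - df["updated_at"].max()).days,
        # Integrity: count of duplicate applicant identifiers
        "duplicate_ids": int(df["applicant_id"].duplicated().sum()),
    }

applications = pd.DataFrame({
    "applicant_id": [101, 102, 102, 104],
    "credit_score": [620, 710, 710, None],
    "updated_at": pd.to_datetime(["2024-05-01", "2024-05-02",
                                  "2024-05-02", "2024-04-20"]),
})
print(quality_report(applications))
```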
Data observability refers to the ability to monitor, understand, and measure the health of data across the entire pipeline, from data ingestion to AI model deployment. Observability tools provide real-time insights into data quality, helping banks ensure that data used by AI models is complete, accurate, and up to date.
Banks and credit unions using AI to make decisions must be able to measure data quality at scale. Without observability, data quality issues can go undetected, leading to poor model performance, biased predictions, and difficult-to-explain decisions.
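Building on the quality-report idea above, an observability check might compare each new batch of data against a baseline and raise alerts on drift. The metric names and thresholds here are assumptions for the sketch, not recommendations:

```python
# An illustrative observability check: compare a fresh batch's metrics
# (e.g., the output of quality_report above) against a baseline and
# raise alerts on drift. Thresholds here are assumptions for the sketch.
def check_batch(metrics: dict, baseline: dict, tolerance: float = 0.05) -> list:
    """Flag regressions in completeness, freshness, and integrity."""
    alerts = []
    for column, completeness in metrics["completeness"].items():
        if completeness < baseline["completeness"][column] - tolerance:
            alerts.append(f"completeness dropped for {column}: {completeness:.0%}")
    if metrics["staleness_days"] > baseline["max_staleness_days"]:
        alerts.append(f"data is {metrics['staleness_days']} days stale")
    if metrics["duplicate_ids"] > 0:
        alerts.append(f"{metrics['duplicate_ids']} duplicate applicant ids")
    return alerts  # in production, route these to paging or a dashboard

batch = {"completeness": {"credit_score": 0.91}, "staleness_days": 3,
         "duplicate_ids": 1}
baseline = {"completeness": {"credit_score": 0.99}, "max_staleness_days": 1}
for alert in check_batch(batch, baseline):
    print("ALERT:", alert)
```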
Data rarely exists in a perfect, ready-to-use format. Before it can be fed into AI models, it must undergo data transformation, where it is cleaned, aggregated, and made usable. However, every transformation introduces the risk of altering the data in a way that affects the model’s decisions.
Data lineage traceability – the ability to trace data from its source, through each transformation, to its final use in AI models – is critical to maintaining accountability in the AI decision-making process.
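One lightweight way to approximate lineage traceability is to record a timestamped entry alongside every transformation. The source name, steps, and fields below are hypothetical:

```python
# A minimal sketch of capturing lineage alongside each transformation;
# the source name, steps, and fields are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import pandas as pd

@dataclass
class LineageLog:
    source: str
    steps: list = field(default_factory=list)

    def record(self, step: str, detail: str) -> None:
        # Timestamped trail of what changed and when
        self.steps.append({"step": step, "detail": detail,
                           "at": datetime.now(timezone.utc).isoformat()})

raw = pd.DataFrame({"applicant_id": [101, 102, 102, 104],
                    "credit_score": [620, 710, 710, None]})
lineage = LineageLog(source="core_banking.loan_applications")

clean = raw.drop_duplicates(subset=["applicant_id"])
lineage.record("dedupe", "removed duplicate applicant_id rows")

clean = clean.dropna(subset=["credit_score"])
lineage.record("filter", "dropped rows with null credit_score")

# Auditors can trace any model input back through every transformation
for entry in lineage.steps:
    print(entry)
```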
Like any advanced data application, a healthy AI program starts with fundamental data management and data governance. Developing an XAI strategy can go a long way in allaying concerns around regulatory reporting and can mitigate fears of bias or model hallucination. While AI Readiness doesn’t happen overnight, it can happen incrementally. If you’re ready to get started, contact us today.