Council Post: The Fault in AI Predictions: Why Explainability Trumps Predictions


The last few years have seen tectonic shifts in the fields of artificial intelligence and machine learning. There have also been plenty of examples where models failed or their predictions created troubling outcomes, creating stumbling blocks to adopting AI/ML, especially for mission-critical functions and in highly regulated industries. For example, research shows that even when algorithms predict the future more accurately than human forecasters, decision makers still choose the human forecaster over the statistical algorithm. This phenomenon, known as algorithm aversion, is costly, and it is important to understand its causes. This gave rise to Explainable AI (XAI).

What is XAI?

In machine learning, Explainability (XAI) refers to understanding and comprehending a model’s behaviour from input to output. It addresses the ‘black box’ problem by making models transparent. Explainability covers a broad scope: explaining technical aspects, demonstrating the impact of a change in variables, showing how much weight each input is given, and more. It also provides the much-needed evidence backing an ML model’s predictions, making the model trustworthy, responsible and auditable.



The main goal of Explainability is to understand the model, and it lays out how and why a model has given a prediction. There are two types of Explainability: 

  • Global Explainability, which focuses on the overall model behaviour, providing an overview of how various data points affect the prediction.
  • Local Explainability, which focuses on individual model predictions and how the model functioned for that prediction.
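The distinction above can be sketched with a simple linear model, where the fitted coefficients act as a global explanation and the per-feature terms of a single prediction act as a local one. All data and feature names below are synthetic, purely for illustration:

```python
import numpy as np

# Synthetic "loan" data: columns stand for [income, debt_ratio, tenure]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([2.0, -3.0, 0.5])            # hidden data-generating weights
y = X @ w_true + rng.normal(scale=0.1, size=200)

# Fit an ordinary least-squares model as the "predictor"
w, *_ = np.linalg.lstsq(X, y, rcond=None)

features = ["income", "debt_ratio", "tenure"]

# Global explanation: how much weight each feature carries across ALL predictions
global_importance = dict(zip(features, np.abs(w)))

# Local explanation: each feature's contribution to ONE specific prediction
x_case = X[0]
local_contribution = dict(zip(features, w * x_case))

print("global:", global_importance)
print("local :", local_contribution)
```

Here the global view says debt_ratio matters most overall, while the local view shows which features pushed one particular case up or down.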


How is XAI relevant to different stakeholders?

Fundamentally, any user of these models needs additional explanations to understand how the model arrived at a prediction. The depth of such explanations varies with the criticality of the prediction and the background and influence of that user. For example, in loan underwriting use cases, users are typically Underwriters, Customers, Auditors, Regulators, Product Managers and Business Owners. Each of them needs a different explanation of how the model worked, and the depth of that explanation varies from an underwriter to a regulator.

Most commonly used XAI techniques are understandable only to an AI expert. Just as the rapid adoption of AI has driven a need for simpler tools and frameworks, there is a growing need for a more straightforward framework for Explainability.

Builders: DS/ ML teams

ML engineers and data scientists are the builders of automated predictive systems. They work with volumes of data to optimise the model’s decision-making. Hence, they need to monitor the model and understand the system’s behaviour to improve it, ensure consistency in model performance, flag performance outliers to uncover retraining opportunities, and ensure there is no underlying bias in the data. Explainable AI helps them answer the most crucial questions, like:

  • Is there bias in the data?
  • What has worked in the model and what hasn’t?
  • How can one improve the model performance?
  • How should one modify the model?
  • How can one be informed about model deviation in production? 
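As a concrete illustration of the first question, a builder might start with a simple disparity check such as demographic parity, comparing approval rates across groups. The records and group labels below are entirely made up:

```python
# A minimal sketch of a data-bias check: compare approval rates across a
# protected attribute. All records here are fabricated for illustration.
approvals = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

def approval_rate(rows, group):
    sub = [r["approved"] for r in rows if r["group"] == group]
    return sum(sub) / len(sub)

rate_a = approval_rate(approvals, "A")   # 0.75
rate_b = approval_rate(approvals, "B")   # 0.25
parity_gap = abs(rate_a - rate_b)        # 0.50

# A large gap is a signal to investigate further, not proof of bias by itself.
print(f"demographic parity gap: {parity_gap:.2f}")
```

A check like this is only a first pass; a large gap flags the data for deeper analysis rather than settling the question.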

Maintenance: Production/Software Engineering teams

IT/ Engineering teams need to ensure that the AI system runs effectively, gain deep insights into its everyday operations, and troubleshoot any issues that arise. Using Explainable AI equips them to stay on top of crucial questions like:

  • Why has this issue occurred? What can be done to fix it?
  • How can one enhance operational efficiency?
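One common way engineering teams stay informed about model deviation in production is the Population Stability Index (PSI), which compares the score distribution seen at training time with the one observed live. The sketch below is a minimal stdlib-only implementation; the 0.25 threshold mentioned in the comment is a widely used rule of thumb, not a universal standard:

```python
import math

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between baseline and production scores."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Floor at eps so the log below is always defined
        return [max(c / len(xs), eps) for c in counts]

    p, q = hist(expected), hist(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

train_scores = [i / 100 for i in range(100)]         # uniform baseline
prod_scores = [(i / 100) ** 2 for i in range(100)]   # shifted distribution
print(f"PSI: {psi(train_scores, prod_scores):.3f}")  # > 0.25 flags a shift
```

Identical distributions give a PSI of zero, so the metric doubles as a cheap regression test for the serving pipeline.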

Users: Experts/Decision makers/Customers

Users are the end consumers of the model predictions. Explainable AI helps them see, in a simple, interpretable format, whether their goals are being met, how the model uses the data, and why the model made a particular prediction. For example, in underwriting, if a new case is classified as ‘high risk’, the underwriter will have to understand how and why the model arrived at its decision, whether the decision is fair, and whether it complies with regulatory guidelines. Explainable AI helps such end users get insights on:

  • How did the model arrive at this decision?
  • How is the input data being used for decision-making?
  • Why does the case fall in this category? What can be done to change it?
  • Has the model acted fairly and ethically? 

Owners: Business/Process/Operations owners

Business or Process owners need to understand the model behaviour and analyse its impact on the overall business. They must look at multiple aspects such as refining strategy, enhancing customer experiences, and ensuring compliance. Explainable AI equips them with comprehensive model visibility to track bias, increase customer satisfaction, and visualise the business impact of predictions, along with answering the following:

  • How is the system arriving at this decision?
  • Are the desired goals being met?
  • What variables are considered and how?
  • What are the acceptable and unacceptable boundary limits on this transaction?
  • How can this AI decision be defended to a regulator or customer?

Risk managers: Audit/Regulators/Compliance

Regulators and Auditors need the trust and confidence that risks are under control. Explainable AI provides them with information on the model’s functions, fairness and possible biases, and a clear view of failure scenarios, while ensuring that the organisation is practising responsible and safe AI and meeting regulatory/compliance requirements. Key questions include:

  • Is there an underlying bias in the model?
  • Is this prediction fair?
  • How can one trust the model outcome?
  • How can we ensure consistency in the model in production?
  • What are the influencing factors in decisions and learning?
  • How can one manage the risk of using AI?

While Explainability has become a prerequisite, justifying prediction accuracy is just as important. A prediction can be accurate, but is it also correct? Hence, accuracy is not enough; evidence is required.

Why is Explainability challenging to attain?

AI systems are inherently complex. Developing, studying, and testing systems for production is complex, and maintaining them in production is significantly more challenging. Explaining them accurately, in a way that is understandable and acceptable to all stakeholders, poses a different challenge altogether!

Explanations: Highly contextual, usually ‘lost in translation’

The explanations need to be understood not only by AI experts but by all stakeholders. Unsurprisingly, though, the complex nature of these systems usually makes them understandable exclusively to AI experts. Data science and ML teams can typically follow the explanations, but when relating them in a business sense, much gets lost in translation.

Consider the current explainability approaches: almost all of them use feature importance as the explanation. But how does a user, an underwriter, a doctor or a risk manager interpret this feature importance? How does it align with their business expertise? For example, for a given prediction, an underwriter might consider ‘Occupation’ the top feature in deciding whether to approve or reject a loan. But the XAI method used by the data science team might not list ‘Occupation’ among the top ten features. This erodes confidence in the model.
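This kind of mismatch can be made measurable. A crude but useful check is the overlap between an XAI method’s top-k features and the features a domain expert expects to matter; all feature names and scores below are hypothetical:

```python
# Toy check of how well an XAI method's top features align with a domain
# expert's expectations. Every name and score here is invented.
xai_importance = {
    "credit_score": 0.31, "loan_amount": 0.22, "income": 0.18,
    "region": 0.14, "tenure": 0.12, "occupation": 0.03,
}
expert_expected = ["occupation", "income", "credit_score"]

# Take the method's top-3 features by importance score
top_k = sorted(xai_importance, key=xai_importance.get, reverse=True)[:3]
overlap = set(top_k) & set(expert_expected)
agreement = len(overlap) / len(expert_expected)

print("XAI top-3:", top_k)                    # 'occupation' is missing
print(f"expert agreement: {agreement:.0%}")
```

An agreement well below 100% is exactly the underwriter's complaint above: the expert's top feature never surfaces in the method's ranking.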

Accuracy of explanations 

Is any XAI method enough to make an AI solution acceptable? The answer depends on the sensitivity of the use case and the user. While minimal XAI may be enough for less sensitive use cases, as cases become sensitive and high-risk, one cannot simply use any ‘XAI’ method.

For sensitive use cases, wrong explanations can create more harm than no explanation! 

Going back to the loan underwriting example—let’s say you used a traditional XAI method like LIME to figure out how your model worked, with feature importance as the output. Unfortunately, LIME can produce different outputs for different perturbations. So, during an internal audit or one by a regulator, the explanation for a case may not align or stay consistent, creating trust challenges in the system and the overall business.
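This instability is easy to reproduce. The sketch below imitates LIME’s core idea, fitting a proximity-weighted linear surrogate to random perturbations around one case, without using the actual `lime` library; note how two different perturbation sets (seeds) yield different local weights:

```python
import numpy as np

def black_box(X):
    # Stand-in for any opaque model: a nonlinear interaction of two features
    return (X[:, 0] * X[:, 1] > 0).astype(float)

def lime_like(x0, seed, n=50, scale=1.0):
    """Fit a local weighted linear surrogate around x0 (a LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    Xp = x0 + rng.normal(scale=scale, size=(n, x0.size))  # perturb the case
    yp = black_box(Xp)
    d = np.linalg.norm(Xp - x0, axis=1)
    sw = np.sqrt(np.exp(-d ** 2))                         # proximity kernel
    A = np.hstack([Xp, np.ones((n, 1))]) * sw[:, None]    # weighted design
    coef, *_ = np.linalg.lstsq(A, yp * sw, rcond=None)
    return coef[:-1]                                      # local feature weights

x0 = np.array([0.5, 0.5])
w1 = lime_like(x0, seed=1)
w2 = lime_like(x0, seed=2)
print("seed 1:", w1)
print("seed 2:", w2)   # different perturbations, different explanation
```

The explanation is reproducible only if the perturbation seed is fixed, which is precisely why an audit months later can surface a different story than the one originally recorded.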

Humans are biased to trust the path of the ‘Nexus trail of evidence’

When interacting with AI models, all stakeholders turn to the ‘Builders’ (Data Science/ML teams) to investigate the source or origin of an explanation. The stakeholders rely on the information the builders share, with little to no direct access to the AI model. If an explanation or its evidence needs further analysis to find the root cause of the learning and validate the decision, developing such a dynamic nexus trail is very complex. Humans also carry intrinsic baggage about learning methods: they tend to trust decision trees whose branches align with their expectations, even when the model’s learning looks chaotic in hindsight yet may concur with its global behaviour.

Diversity of metrics

While there are various tools to explain or interpret AI models, each focuses on only a fraction of what defines an accurate, sufficient explanation, without capturing the other dimensions. An effective, in-depth explanation requires combining various metrics: reviewing different types of opacity, analysing multiple XAI approaches (since different approaches can generate different explanations), running consistent user studies (which can vary because of UI phrasing, visualisations, specific contexts, needs and more), and ultimately developing standard metrics.
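One such metric is the rank agreement between two XAI approaches over the same feature set, for example a Spearman-style correlation of their importance rankings. The scores below are invented purely for illustration:

```python
import numpy as np

# Two hypothetical XAI methods scoring the same five features;
# the numbers are made up to illustrate the metric, nothing more.
method_a = np.array([0.40, 0.25, 0.15, 0.12, 0.08])
method_b = np.array([0.10, 0.35, 0.30, 0.15, 0.09])

def rank(v):
    # rank 0 = least important, assuming no tied scores
    r = np.empty(len(v), dtype=float)
    r[np.argsort(v)] = np.arange(len(v))
    return r

def spearman(a, b):
    """Pearson correlation of the two rank vectors."""
    ra, rb = rank(a), rank(b)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

print(f"rank agreement: {spearman(method_a, method_b):.2f}")
```

A value near 1 means the two methods tell the same story; a low or negative value is a warning that the “explanation” depends heavily on which tool was picked.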

Explainability risks

AI explainability comes with its own risks. As mentioned earlier, poor or incorrect explanations can hurt an organisation badly. Delinquent actors or competitors can exploit them, raising privacy risks, especially for mission-critical decisions. Organisations need practical measures in place to mitigate these risks.

While everyone focuses on building models, the right product teams have started emphasising the fundamentals of good AI solutions, and XAI is foundational to achieving them. The vision of trustworthy AI is incomplete without Explainability. Yet the idea that Explainability will provide insights into model behaviour currently serves only the needs of AI experts. To achieve truly explainable, scalable and trustworthy AI, Explainability should be incorporated in a way that works across different domains, objectives and stakeholders.

Increased clarity on regulations has also made regulated industries look at XAI more seriously and re-evaluate currently deployed models, along with the risk of using them in production. As more users experiment with and validate XAI templates, we could soon see good templates for each use case. In such a scenario, AutoML + AutoXAI could scale adoption exponentially while still achieving responsible and trustworthy AI.

This article is written by a member of the AIM Leaders Council, an invitation-only forum of senior executives in the Data Science and Analytics industry.

Vinay Kumar Sankarapu