What Makes Explainable AI So Difficult

Explainable AI refers to methods and techniques in the application of artificial intelligence such that the results of the solution can be understood by human experts. It contrasts with the concept of the “black box” in machine learning and enables transparency.

[Chart: number of publications with XAI as a keyword over the past five years, via Alejandro Barredo Arrieta et al.]

The need for transparency can be seen in researchers' growing interest in the topic. The chart above shows how the number of publications with XAI as a keyword has risen over the past five years.

Before we go further, there are two things we should get out of the way: how interpretability differs from explainability, and what XAI's biggest promised outcome is.

Interpretability vs Explainability

A model is said to be interpretable when its outcomes make sense to the user, whereas explainability deals with the inner workings of the model. If the former answers the ‘what’, the latter answers the ‘why’.

Fairness And Transparency

Another important objective of explainable AI is exposing the biases baked into the data, and XAI methods are expected to pave the way to fairer machine learning practice.

So, how can one implement XAI?

There are a few fundamental ways in which this can be done:

  • Text explanation
  • Visual explanation
  • Explanations by example

Of these three, visual explanation is the most popular. However, XAI implementations are still scarce because not every algorithm is designed with explainability and transparency of results in mind; the objective is usually to decrease loss, improve accuracy and stay consistent at scale.
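To make the third route concrete, here is a minimal, hypothetical sketch of ‘explanation by example’: a prediction is justified by retrieving the training instances it most resembles. The dataset, model choice and neighbour count are illustrative assumptions, not a prescribed recipe.

```python
# Minimal sketch of "explanation by example": justify a prediction by showing
# the most similar training instances. Assumes scikit-learn is available.
from sklearn.datasets import load_iris
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)   # any black-box model
index = NearestNeighbors(n_neighbors=3).fit(X)             # index over training data

query = X[50:51]                                           # instance to explain
prediction = model.predict(query)[0]
_, neighbour_ids = index.kneighbors(query)

print(f"Predicted class: {prediction}")
print("Because it resembles these training examples:")
for i in neighbour_ids[0]:
    print(f"  sample #{i} with label {y[i]}")
```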

What Makes It Difficult

Though having explainability as a criterion sounds good, there are a few hurdles that developers and practitioners have to deal with.

Performance tradeoff: 

The first step towards making things more explainable is to make the models simpler, which makes it easier to break their processes down. However, complex models are more flexible and scale better in real time. Recommendation engines, for example, cannot be expected to operate under fixed constraints. So, to keep the model's performance consistent at higher dimensionality, the complexity has to be embraced.
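The trade-off is easy to see even on a toy task. The sketch below (assuming scikit-learn; the dataset and model choices are arbitrary) fits a shallow, fully inspectable decision tree next to a boosted ensemble and compares their accuracy; the ensemble usually wins, but only the tree can be printed and read end to end.

```python
# Minimal sketch of the performance trade-off: a small, inspectable tree
# versus a more complex ensemble on the same (synthetic) task.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

simple = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
complex_model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("Shallow tree accuracy :", simple.score(X_te, y_te))
print("Boosted model accuracy:", complex_model.score(X_te, y_te))
print(export_text(simple))   # the whole simple model fits in a few readable rules
```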

Establishing the metrics:

Now let’s say we have somehow figured out the tools to make models more explainable. But who gets to say that a model’s actions are explainable? What metrics does one need to stick to for overall acceptance? A metric that works for a developer might not work for a GDPR compliance manager. Though domain knowledge would clear the initial hiccups, the question of who gets to pick still looms large.

Why Data Fusion Is A Big Deal

Data fusion techniques were initially developed to exploit the overlap between data from various sources so a task could be learned faster; they merge heterogeneous information to improve the performance of ML models. The researchers point out that there is still a lack of active research at the intersection of explainable AI and data fusion techniques.
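As a rough illustration of what ‘merging heterogeneous information’ can mean in practice, the sketch below performs simple feature-level fusion; the source names and shapes are hypothetical.

```python
# Minimal sketch of feature-level data fusion: heterogeneous sources describing
# the same entities are merged into a single feature matrix before modelling.
# The source names and shapes below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
sensor_readings = rng.normal(size=(100, 4))    # e.g. numeric sensor channels
text_embeddings = rng.normal(size=(100, 8))    # e.g. embeddings of text reports

# Simple fusion: align rows by entity and concatenate along the feature axis.
fused = np.concatenate([sensor_readings, text_embeddings], axis=1)
print(fused.shape)   # (100, 12) -- one richer representation per entity
```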

However, they speculate on a few approaches that might eventually lead to beneficial outcomes. For instance, in big data fusion, each worker node receives its own split of the data sources, and this information is then processed via the popular Map and Reduce steps. In other words, the complexity in the information is split, mapped and later reduced; a fine example of information fusion.
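A stripped-down illustration of that split, map and reduce pattern, using plain Python in place of a real big-data framework, might look like this; the ‘splits’ and the statistic being fused are made up for the example.

```python
# Minimal sketch of the split -> map -> reduce pattern described above,
# using plain Python in place of a real big-data framework.
from functools import reduce

data_sources = [
    [1.0, 2.0, 3.0],       # split held by worker 1
    [4.0, 5.0],            # split held by worker 2
    [6.0, 7.0, 8.0, 9.0],  # split held by worker 3
]

# Map: each worker summarises its own split (here: a sum and a count).
partials = [(sum(split), len(split)) for split in data_sources]

# Reduce: the partial summaries are fused into one global statistic.
total, count = reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]), partials)
print("global mean:", total / count)
```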

Though data fusion and explainability have largely been kept apart, the advent of deep learning methods has bridged the gap between the two concepts. The way features are learned in the initial layers of a deep neural network is analogous to many data fusion techniques, and since explainability deals with decoding high-dimensional data, this becomes an interesting pursuit.

Bringing data fusion to the fore as a solution also sparks a discussion on privacy. To this end, federated learning has shown promising results: models are trained locally at each node and only the model updates are shared, so the raw data never leaves its source.
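A minimal sketch of the federated averaging idea, written with NumPy and a toy linear model rather than any particular federated learning framework, is shown below; the number of nodes, rounds and the local training routine are all illustrative assumptions.

```python
# Minimal sketch of federated averaging (FedAvg): each node trains on its own
# data and only model weights are shared, never the raw data.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One node's local training: plain gradient descent on squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three nodes, each with private data that never leaves the node.
nodes = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(10):
    local_weights = [local_update(global_w, X, y) for X, y in nodes]
    global_w = np.mean(local_weights, axis=0)   # server averages the updates only

print("federated model weights:", global_w)
```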

What Can Be Done?

An immediate solution is the one discussed above: explainability through visualisation. TensorFlow has the What-If Tool, and Google has also released Activation Atlases. These are easy routes for outsiders and practitioners alike. Along with these, there are a bunch of other methods that enable explainability.
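The tools named above come with their own workflows; as a library-agnostic illustration of explanation through visualisation, the sketch below plots permutation importance for an arbitrary model (the dataset and model are assumptions made for the example).

```python
# Minimal sketch of explanation through visualisation: permutation importance
# plotted as a bar chart (the What-If Tool and Activation Atlases have their
# own richer workflows; this is only a generic illustration).
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
order = result.importances_mean.argsort()[-10:]   # ten most influential features

plt.barh([data.feature_names[i] for i in order], result.importances_mean[order])
plt.xlabel("mean drop in accuracy when shuffled")
plt.tight_layout()
plt.show()
```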

These are solutions for those who already have an ML pipeline in place. For firms that are just starting out, however, the intent has to be there from the initial stages. According to the study, companies should balance the cultural and organisational changes needed to enforce such responsibilities over AI-driven processes with the feasibility and compliance of implementing such principles given the IT assets, policies and resources already available at the company.

They believe it is in the gradual rise of corporate awareness around the principles and values of Responsible AI that XAI will find its place and create a huge impact.

Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
