Researchers and neural network builders often cannot explain what is going on inside the black box. Deep neural networks and large ensemble models, among others, have been achieving significant accuracies, but these algorithms offer little explanation for their decisions.
Who doesn’t like a clear explanation of a complex concept? For a few years now, researchers have been trying to figure out and explain how a model turns its inputs into outputs, in order to get more meaningful insights from it.
Fig: AI Explainability 360 Toolkit
Recently, IBM researchers open-sourced AI Explainability 360, a collection of state-of-the-art machine learning algorithms that helps developers gain more explainable insights into machine learning models and their predictions.
AI Explainability 360
Fig: AI Explainability 360 Usage Diagram
AI Explainability 360 is an open-source toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models. The toolkit includes five different classes of algorithms, and the choice of class depends on the persona of the consumer of the explanation. They are:
- Data Explanation: This class helps the consumer understand the data better.
- Global Direct Explanation: In this class, the model itself is understandable.
- Local Direct Explanation: In this class, an individual prediction is meaningful on its own.
- Global Post Hoc Explanation: In this class, an understandable model explains the black-box model.
- Local Post Hoc Explanation: In this class, an explanation is created for an individual prediction.
At present, a total of eight algorithms are included across the above-mentioned classes:
- Boolean Decision Rules via Column Generation: This algorithm implements a directly interpretable supervised learning method for binary classification that learns a Boolean rule in disjunctive normal form (DNF) or conjunctive normal form (CNF) using column generation (CG). For classification problems, Boolean Decision Rules tends to return simple models that can be quickly understood.
- Generalised Linear Rule Models: These models are applicable to both classification and regression problems. For classification problems, Generalised Linear Rule Models can achieve higher accuracy while retaining the interpretability of a linear model.
- ProfWeight: This algorithm can be applied to neural networks to produce instance weights, which can then be applied to the training data to learn an interpretable model.
- Teaching AI to Explain Its Decisions: This algorithm is an explainability framework that leverages domain-relevant explanations in the training dataset to predict both labels and explanations for new instances.
- Contrastive Explanations Method: The basic version of this algorithm, for classification with numerical features, can be used to compute contrastive explanations for image and tabular data.
- Contrastive Explanations Method with Monotonic Attribute Functions: This algorithm is a contrastive image explainer that leverages monotonic attribute functions. The main idea behind it is to explain images using high-level, semantically meaningful attributes that may either be directly available or learned through supervised or unsupervised methods.
- Disentangled Inferred Prior Variational Auto-Encoder (DIP-VAE): This is an unsupervised representation-learning algorithm that takes the given features and learns a new, disentangled representation, making the resulting features more understandable.
- ProtoDash: This algorithm is a way of understanding a dataset with the help of prototypes. It provides exemplar-based explanations for summarising a dataset as well as for explaining predictions made by an AI model. It employs a fast gradient-based algorithm to find prototypes along with their (non-negative) importance weights.
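To make the DNF idea behind Boolean Decision Rules concrete, here is a minimal, self-contained sketch in plain Python. The rule, feature names, and thresholds below are made up for illustration; BRCG would learn such a rule from binarized features via column generation rather than have it hand-written:

```python
# A Boolean decision rule in disjunctive normal form (DNF): predict the
# positive class if ANY clause fires, where each clause is an AND of
# simple conditions on the input features.

def dnf_predict(record, rule):
    """Return True if every predicate in at least one clause holds."""
    return any(all(pred(record) for pred in clause) for clause in rule)

# Hypothetical learned rule: (age > 50 AND smoker) OR (blood pressure > 140)
rule = [
    [lambda r: r["age"] > 50, lambda r: r["smoker"]],
    [lambda r: r["bp"] > 140],
]

print(dnf_predict({"age": 62, "smoker": True, "bp": 120}, rule))   # True
print(dnf_predict({"age": 40, "smoker": False, "bp": 120}, rule))  # False
```

The appeal of such a model is exactly what the article notes: the entire decision logic fits in a couple of human-readable clauses.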
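ProtoDash itself optimises prototype importance weights with a fast gradient-based procedure; the sketch below is a much simpler greedy stand-in, written by us to capture the spirit of prototype selection (pick points that are similar to the whole dataset but not redundant with prototypes already chosen). The function names and toy data are ours, not the AIX360 API:

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel similarity between two equal-length vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def select_prototypes(data, m, gamma=0.5):
    """Greedily pick m representative points: each step adds the candidate
    with the highest average similarity to the whole dataset, penalised by
    its similarity to the prototypes already chosen."""
    chosen = []
    for _ in range(m):
        best, best_score = None, float("-inf")
        for i, x in enumerate(data):
            if i in chosen:
                continue
            attraction = sum(rbf(x, y, gamma) for y in data) / len(data)
            redundancy = sum(rbf(x, data[j], gamma) for j in chosen)
            score = attraction - redundancy
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

# Two well-separated clusters; the greedy pass picks one point from each.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
        (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
protos = select_prototypes(data, 2)
print(protos)
```

Inspecting the rows behind the returned indices is the "exemplar-based explanation": instead of coefficients, the summary of the dataset is a handful of real records.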
In our previous articles, we discussed how explainable AI has become one of the prime requirements in this AI-driven era and how it is driving markets and opportunities in organisations. Earlier in March this year, Microsoft open-sourced InterpretML, a software toolkit aimed at solving AI’s “black box” problem. The main motive behind such toolkits is that developers will be able to compare and contrast the explanations produced by different methods and select the ones that best suit their needs.