Will IBM’s Latest AI Toolkit Set The Path Towards Explainable AI?

Researchers and neural net builders often cannot explain what is going on inside the black box. Deep neural networks and large ensemble models, for instance, have been achieving significant accuracy, but these algorithms offer little explanation for the decisions they make.

Who doesn’t like a clear explanation of a complex concept? For a few years now, researchers have been trying to work out and explain how a model turns its inputs into its outputs, in order to draw more meaningful insights from it.


Fig: AI Explainability 360 Toolkit

Recently, IBM researchers open-sourced AI Explainability 360, a collection of state-of-the-art machine learning algorithms that helps developers gain more explainable insights into machine learning models and their predictions.

AI Explainability 360

Fig: AI Explainability 360 Usage Diagram 

AI Explainability 360 is an open-source toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models. The toolkit groups its algorithms into five classes, and choosing among them depends on the persona of the consumer of the explanation (a brief import sketch follows the list below):

  • Data Explanation: This class helps in understanding the data better.
  • Global Direct Explanation: In this class, the model itself is understandable.
  • Local Direct Explanation: In this class, each individual prediction is meaningful.
  • Global Post-hoc Explanation: In this class, an understandable model explains the black-box model.
  • Local Post-hoc Explanation: In this class, an explanation is created for each individual prediction.
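
As a rough illustration of how these classes map onto the toolkit's code, the sketch below shows where some of the algorithm families live. The pip package name aix360 is the project's official one, but the exact module paths are assumptions based on the repository's layout and may differ between releases.

    # A minimal sketch of getting started with AI Explainability 360.
    # "aix360" is the official pip package name; the module paths below
    # are assumptions based on the project's repository layout and may
    # differ between releases.
    #
    #   pip install aix360

    # Directly interpretable models (Boolean Decision Rules, GLRM)
    from aix360.algorithms.rbm import BooleanRuleCG, LogisticRuleRegression

    # Local post-hoc explainers (Contrastive Explanations Method)
    from aix360.algorithms.contrastive import CEMExplainer

    # Data explanations via prototypes (ProtoDash)
    from aix360.algorithms.protodash import ProtodashExplainer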

At present, a total of eight algorithms are included across the above-mentioned classes:

  1. Boolean Decision Rules via Column Generation: A directly interpretable supervised learning method for binary classification that learns a Boolean rule in disjunctive normal form (DNF) or conjunctive normal form (CNF) using column generation (CG). Boolean Decision Rules tends to return simple models that can be quickly understood (a training sketch follows this list).
  2. Generalised Linear Rule Models: Applicable to both classification and regression problems. For classification, Generalised Linear Rule Models can achieve higher accuracy while retaining the interpretability of a linear model.
  3. ProfWeight: Applied to a trained neural network to produce instance weights, which can then be applied to the training data to learn an interpretable model.
  4. Teaching AI to Explain Its Decisions: An explainability framework that leverages domain-relevant explanations in the training dataset to predict both labels and explanations for new instances.
  5. Contrastive Explanations Method: The basic version, for classification with numerical features, can be used to compute contrastive explanations for image and tabular data.
  6. Contrastive Explanations Method with Monotonic Attribute Functions: A contrastive image explainer that leverages Monotonic Attribute Functions. The main idea is to explain images using high-level, semantically meaningful attributes that may either be directly available or learned through supervised or unsupervised methods.
  7. Disentangled Inferred Prior Variational Auto-Encoder (DIP-VAE): An unsupervised representation learning algorithm that learns a new, disentangled representation of the input features so that the resulting features are more understandable.
  8. ProtoDash: A way of understanding a dataset with the help of prototypes. It provides exemplar-based explanations for summarising a dataset as well as explaining predictions made by an AI model, employing a fast gradient-based algorithm to find prototypes along with their (non-negative) importance weights (a usage sketch also follows this list).
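
To give a feel for item 1 in practice, here is a hedged sketch of training a Boolean rule classifier. The class names FeatureBinarizer, BooleanRuleCG and BRCGExplainer come from the aix360 repository, but their exact signatures are assumptions and should be checked against the toolkit's documentation.

    # Hedged sketch: learning a Boolean rule for binary classification with
    # BRCG. Class names follow the aix360 repository; exact signatures are
    # assumptions and may differ between releases.
    import pandas as pd
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from aix360.algorithms.rbm import FeatureBinarizer, BooleanRuleCG, BRCGExplainer

    data = load_breast_cancer()
    X = pd.DataFrame(data.data, columns=data.feature_names)
    X_train, X_test, y_train, y_test = train_test_split(X, data.target, random_state=0)

    # BRCG operates on binary features, so continuous columns are first
    # converted into threshold-based binary columns.
    fb = FeatureBinarizer(negations=True)
    X_train_b = fb.fit_transform(X_train)
    X_test_b = fb.transform(X_test)

    # Fit the column-generation rule learner and inspect the learned rule.
    explainer = BRCGExplainer(BooleanRuleCG())
    explainer.fit(X_train_b, y_train)
    print(explainer.explain())                              # human-readable rule clauses
    print((explainer.predict(X_test_b) == y_test).mean())   # held-out accuracy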
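
Similarly, a minimal sketch of ProtoDash (item 8) on a toy dataset might look like the following; the explain() argument order and return values here are assumptions based on the repository and may differ.

    # Hedged sketch: summarising a dataset with ProtoDash prototypes.
    # ProtodashExplainer is the class name used in the aix360 repository;
    # the explain() signature shown is an assumption.
    import numpy as np
    from aix360.algorithms.protodash import ProtodashExplainer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))          # toy dataset to be summarised

    explainer = ProtodashExplainer()
    # Select m prototypes from X that best represent X itself, along with
    # non-negative importance weights for each selected prototype.
    weights, prototype_idx, _ = explainer.explain(X, X, m=5)
    print(prototype_idx)                    # indices of the exemplar rows
    print(weights)                          # their (non-negative) importance weights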

Outlook

In our previous articles, we discussed how explainable AI has become one of the prime requirements of this AI-driven era and how it is driving markets and opportunities for organisations. Earlier in March this year, Microsoft open-sourced InterpretML, a software toolkit aimed at solving AI’s “black box” problem. The main motive behind such explainability toolkits is that developers can compare and contrast the explanations produced by different methods and select the ones that best suit their needs.

Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
