Top GitHub libraries for building explainable AI models

The lack of explainability is a significant barrier to creating sustainable, responsible and trustworthy AI. GitHub is home to several libraries focused on explaining black-box models, auditing model data and creating transparent models. Below, we list the top GitHub libraries for tackling the black-box problem in AI models.

imodels

imodels packages cutting-edge techniques for concise, transparent and accurate predictive modelling. Interpretable models are often difficult to use and implement; the Python library, created by researchers at UC Berkeley, fills this gap with a simple, unified interface for fitting and using many state-of-the-art interpretable modelling techniques.

Read more about it here

Find the library here

Find the blog post
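The appeal of such models is that the fitted object is human-readable. As a from-scratch illustration of the kind of concise, rule-based model imodels implements, consider a "one-rule" classifier that searches for the single feature-threshold rule with the fewest training errors (this sketch is not the imodels API):

```python
# A from-scratch "one-rule" classifier: find the single (feature, threshold)
# rule with the fewest training errors. Illustrative only -- not imodels.

def fit_stump(X, y):
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            preds = [1 if row[f] > t else 0 for row in X]
            errs = sum(p != label for p, label in zip(preds, y))
            if best is None or errs < best[0]:
                best = (errs, f, t)
    _, f, t = best
    return f, t                      # the whole model is one readable rule

def predict_stump(rule, X):
    f, t = rule
    return [1 if row[f] > t else 0 for row in X]

# Toy data: class is 1 when the second feature is large.
X = [[1, 2], [2, 7], [3, 6], [4, 1]]
y = [0, 1, 1, 0]
rule = fit_stump(X, y)
print(rule, predict_stump(rule, X))
```

The learned model is fully transparent: a single "feature 1 > 2" rule that anyone can inspect, which is exactly the trade-off space imodels explores with richer rule lists and rule sets.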

Captum 

Captum is a PyTorch model interpretability and understanding library developed by Facebook. It consists of state-of-the-art algorithms that help researchers and developers figure out which features contribute to a model’s output. Captum provides easily implementable interpretability algorithms that interact with PyTorch models, including general-purpose implementations of integrated gradients, saliency maps, SmoothGrad and VarGrad.

Find the library here

Read the paper here

Visit the project website
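To make the flavour of these attribution methods concrete, here is integrated gradients sketched from scratch for a toy differentiable function with a known gradient: average the gradient along the straight path from a baseline to the input, then scale by the input difference. Captum does this for arbitrary PyTorch models; this stdlib-only sketch only illustrates the algorithm, not the Captum API:

```python
# Integrated gradients from scratch for a toy function (not the Captum API).

def f(x):                      # toy "model": f(x) = x0**2 + 3*x1
    return x[0] ** 2 + 3 * x[1]

def grad_f(x):                 # its analytic gradient
    return [2 * x[0], 3.0]

def integrated_gradients(x, baseline, steps=1000):
    """Average the gradient along the path baseline -> x (midpoint rule),
    then scale by (x - baseline), per the IG definition."""
    n = len(x)
    avg = [0.0] * n
    for k in range(1, steps + 1):
        alpha = (k - 0.5) / steps
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        g = grad_f(point)
        for i in range(n):
            avg[i] += g[i] / steps
    return [(x[i] - baseline[i]) * avg[i] for i in range(n)]

x, baseline = [2.0, 1.0], [0.0, 0.0]
attr = integrated_gradients(x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, sum(attr), f(x) - f(baseline))
```

The printed check demonstrates the completeness axiom that makes integrated gradients attractive: the per-feature attributions account exactly for the change in the model's output relative to the baseline.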

InterpretML 

InterpretML is an open-source package offering machine learning interpretability techniques to train interpretable glass-box models and explain black-box systems. It helps researchers understand both a model’s global behaviour and the reasons behind individual predictions.

Find the library here

Visit the project website
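The glass-box idea can be shown in miniature: in an additive model the score is a sum of per-feature terms, so every prediction decomposes exactly into per-feature contributions, giving global and local explanations for free. InterpretML's flagship Explainable Boosting Machine is a far richer additive model; the linear sketch below (with hypothetical weights) only illustrates the principle:

```python
# Additive glass-box model in miniature: the score is a sum of per-feature
# terms, so each prediction decomposes exactly. Hypothetical weights;
# illustrative only -- not the InterpretML API.

weights = {"age": 0.04, "income": 0.3, "debt": -0.5}
bias = -1.0

def score(x):
    return bias + sum(weights[k] * x[k] for k in weights)

def explain(x):
    """Local explanation: each feature's exact contribution to the score."""
    return {k: weights[k] * x[k] for k in weights}

x = {"age": 35, "income": 4.0, "debt": 2.0}
contribs = explain(x)
print(contribs, sum(contribs.values()) + bias, score(x))
```

Because the decomposition is exact rather than approximate, glass-box explanations of this kind need no post-hoc attribution method at all.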

LIME 

LIME (Local Interpretable Model-agnostic Explanations) is a technique for explaining the predictions of any machine learning classifier and for evaluating those explanations in trust-related tasks. The researchers claim LIME can explain any black-box classifier with two or more classes.

Find the library here

Read the paper here

Visit the blog 
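LIME's core recipe is: perturb the input, query the black box, weight the samples by proximity to the original input, and fit a simple surrogate model whose coefficients serve as the explanation. A one-feature, stdlib-only sketch of that recipe (the lime package handles tabular, text and image data with a much richer pipeline; this is not its API):

```python
import math
import random

# LIME's recipe for one feature: perturb, query the black box, weight by
# proximity, fit a weighted linear surrogate. Illustrative sketch only.

def black_box(x):              # opaque model; its local slope near x=3 is ~6
    return x * x

def lime_1d(x0, n=500, width=0.5):
    random.seed(0)
    xs = [x0 + random.gauss(0, width) for _ in range(n)]
    ys = [black_box(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    # Weighted least squares for y ~ a + b*x (closed form).
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) \
        / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    a = my - b * mx
    return a, b

a, b = lime_1d(3.0)
print(b)   # surrogate slope approximates the local slope of x**2 at x=3
```

The surrogate's coefficient recovers the black box's local behaviour around the instance being explained, which is exactly the "local" in LIME: the linear model is faithful only near that point, not globally.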

Alibi Explain

Alibi Explain is an open-source Python library for ML model inspection and interpretation. It was developed by researchers at Seldon Technologies Limited and the University of Cambridge. It provides high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models. In addition, Alibi provides a set of algorithms called explainers that provide insights into a model.

Find the library here

Read the paper here

Visit the project website
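One family of explainers in Alibi is counterfactuals: the smallest change to an input that flips the model's prediction. For a simple linear scorer, that search can be sketched with a bisection over one feature (a toy illustration with a hypothetical model, not Alibi's counterfactual algorithm or API):

```python
# Toy counterfactual search: bisect one feature to the smallest value
# that flips a linear classifier's decision. Not the Alibi API.

def score(x):                        # hypothetical model: positive => class 1
    return 0.8 * x[0] + 0.4 * x[1] - 1.0

def counterfactual(x, feature, lo=0.0, hi=10.0, iters=60):
    """Bisect the given feature to the smallest value crossing the boundary."""
    base = list(x)
    for _ in range(iters):
        mid = (lo + hi) / 2
        cand = list(base)
        cand[feature] = mid
        if score(cand) > 0:
            hi = mid
        else:
            lo = mid
    cand = list(base)
    cand[feature] = hi
    return cand

x = [0.5, 0.5]                       # score = -0.4 -> class 0
cf = counterfactual(x, feature=0)
print(cf, score(cf))                 # raising x0 just past 1.0 flips the class
```

The answer "your application would be approved if x0 were just above 1.0" is the kind of actionable, instance-level insight counterfactual explainers aim for; Alibi's real algorithms optimise over all features at once with sparsity constraints.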

Aequitas 

Aequitas is an open-source bias-audit toolkit for data scientists, machine learning researchers and policymakers, created by researchers at the Center for Data Science and Public Policy at the University of Chicago. It lets users easily test models against several bias and fairness metrics across multiple population sub-groups, helping audit ML models for discrimination and bias. Aequitas can be used as a web audit tool, a Python library or a command-line tool.

Find the library here

Read the paper here

Visit the project website
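The kind of audit Aequitas automates can be shown in miniature: compute a fairness metric, such as the false-positive rate, separately for each population sub-group and compare the groups as a disparity ratio (a from-scratch sketch on hypothetical scored data, not the Aequitas API):

```python
# Miniature bias audit: per-group false-positive rate and disparity ratio.
# Hypothetical data; illustrative only -- not the Aequitas API.

def false_positive_rate(rows, group):
    neg = [r for r in rows if r["group"] == group and r["label"] == 0]
    fp = [r for r in neg if r["pred"] == 1]
    return len(fp) / len(neg)

rows = [  # hypothetical scored data: sub-group, true label, model prediction
    {"group": "A", "label": 0, "pred": 1},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "A", "label": 1, "pred": 1},
    {"group": "B", "label": 0, "pred": 0},
    {"group": "B", "label": 0, "pred": 0},
    {"group": "B", "label": 0, "pred": 1},
    {"group": "B", "label": 1, "pred": 1},
]
fpr_a = false_positive_rate(rows, "A")          # 1/2
fpr_b = false_positive_rate(rows, "B")          # 1/3
print(fpr_a, fpr_b, fpr_a / fpr_b)              # disparity ratio
```

A disparity ratio far from 1.0, as here, is the signal an auditor would flag; Aequitas computes many such metrics (FPR, FNR, predicted prevalence and more) across all sub-groups at once.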

DeepVis Toolbox 

In the accompanying deep learning paper, the DeepVis researchers introduce two tools for visualising and interpreting neural nets. The first visualises the activations produced on each layer of a trained convnet as it processes an image or video; tracking live activations that change in response to user input helps build valuable intuitions about how convnets work. The second visualises the features learned at each layer of a DNN via regularised optimisation in image space. The repository contains the code required to run the Deep Visualization Toolbox and to generate the neuron-by-neuron visualisations using regularised optimisation.

Find the library here

Read the paper here

Visit the project website
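The second tool's "regularised optimisation in image space" means running gradient ascent on the input to maximise a chosen unit's activation, with a penalty keeping the input well-behaved. The same idea on a toy linear "neuron" with an L2 penalty (an illustrative sketch, not the toolbox code):

```python
# Regularised input optimisation in miniature: gradient ascent on
# activation(w, x) - l2 * ||x||^2 for a toy linear unit. Not DeepVis code.

def activation(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def visualise_unit(w, steps=2000, lr=0.1, l2=0.05):
    """Gradient ascent on the regularised objective; the fixed point is
    x = w / (2 * l2), i.e. an input aligned with the unit's weights."""
    x = [0.0] * len(w)
    for _ in range(steps):
        grad = [wi - 2 * l2 * xi for wi, xi in zip(w, x)]
        x = [xi + lr * gi for xi, gi in zip(x, grad)]
    return x

w = [1.0, -2.0, 0.5]          # hypothetical unit weights
x = visualise_unit(w)
print(x, activation(w, x))    # x approaches w / (2 * l2) = 10 * w
```

For a deep network the gradient comes from backpropagation rather than a closed form, and the regularisers are richer (blurring, jitter, clipping), but the loop is the same: ascend the activation, penalise implausible inputs.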

IBM AI Explainability 360 

AI Explainability 360 is an open-source toolkit from IBM that helps developers comprehend how machine learning models predict labels by various means throughout the AI application lifecycle. It consists of eight state-of-the-art algorithms covering different dimensions of explanation, along with proxy explainability metrics. The researchers also provide a taxonomy to help those requiring explanations navigate the space of explanation methods, and extensible software that lets data scientists organise methods according to their place in the AI modelling pipeline.

Find the library here

Read the paper here

Visit the project website
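A common proxy explainability metric of the kind AI Explainability 360 ships is "faithfulness": do features with large attributions cause large prediction drops when removed? A from-scratch sketch on a linear model, where the correlation between attributions and drops should be perfect (illustrative only, not the AIX360 API):

```python
# Faithfulness proxy metric in miniature: correlate attributions with the
# prediction drop from removing each feature. Not the AIX360 API.

def model(x):
    w = [2.0, -1.0, 0.5]
    return sum(wi * xi for wi, xi in zip(w, x))

x = [1.0, 3.0, 4.0]
attributions = [2.0 * 1.0, -1.0 * 3.0, 0.5 * 4.0]     # w_i * x_i

drops = []
for i in range(len(x)):
    x_removed = list(x)
    x_removed[i] = 0.0                                 # "remove" feature i
    drops.append(model(x) - model(x_removed))

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a) ** 0.5
    vb = sum((bi - mb) ** 2 for bi in b) ** 0.5
    return cov / (va * vb)

faithfulness = pearson(attributions, drops)
print(faithfulness)                                    # 1.0 for a linear model
```

For real models and explainers the correlation falls below 1.0, and that gap is precisely what such proxy metrics measure: how well an explanation tracks the model's actual behaviour.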

TensorFlow libraries

TensorBoard’s What-If Tool is an interface for analysing the interactions between inference results and data inputs; it lets developers visually probe the behaviour of trained machine learning models with minimal coding. TensorFlow’s CleverHans is an adversarial-example library for benchmarking machine learning systems’ vulnerability to adversarial examples. TensorFlow’s Lucid is a collection of infrastructure and tools for research in neural network interpretability. Lastly, TensorFlow Model Analysis is a library for evaluating TensorFlow models; it allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer.
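The sliced evaluation that TensorFlow Model Analysis performs at scale can be shown in miniature: group the evaluation set by a slicing column and compute the same metric per slice, since an overall number can hide a badly served sub-group (an illustrative stdlib sketch with hypothetical data, not the TFMA API):

```python
# Sliced model evaluation in miniature: overall accuracy vs. accuracy per
# slice of a "country" column. Hypothetical data; not the TFMA API.

def accuracy(examples):
    return sum(e["pred"] == e["label"] for e in examples) / len(examples)

examples = [  # hypothetical eval set with a slicing column
    {"country": "IN", "label": 1, "pred": 1},
    {"country": "IN", "label": 0, "pred": 1},
    {"country": "US", "label": 1, "pred": 1},
    {"country": "US", "label": 0, "pred": 0},
]

slices = {}
for e in examples:
    slices.setdefault(e["country"], []).append(e)

per_slice = {key: accuracy(rows) for key, rows in slices.items()}
print(accuracy(examples), per_slice)   # overall 0.75 hides IN at 0.5
```

The overall accuracy of 0.75 masks the fact that one slice sits at 0.5, which is exactly the kind of disparity sliced evaluation is designed to surface.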

Avi Gopani

Avi Gopani is a technology journalist who analyses industry trends and developments from an interdisciplinary perspective at Analytics India Magazine. Her articles chronicle cultural, political and social stories, curated with a focus on the evolving technologies of artificial intelligence and data analytics.
