Explainable AI (XAI) is key to establishing trust among users and countering the black-box nature of machine learning models. More broadly, XAI enhances the accountability and reliability of machine learning models. Tech giants such as Google and IBM have long poured resources into explainable AI to illuminate the decision-making processes of such models.
Below are the top free resources to understand Explainable AI (XAI) in detail.
(The list is in no particular order)
1| Explainable Machine Learning with LIME and H2O in R
About: Explainable Machine Learning with LIME and H2O in R is a hands-on, guided introduction to explainable machine learning. The topics covered include a project overview, importing libraries, preprocessing data with the recipes package, running AutoML, exploring the leaderboard, and evaluating model performance. By the end of this project, you will be able to use the H2O and LIME packages in R for automatic and interpretable machine learning. You will also learn how to build classification models quickly with H2O AutoML and interpret model predictions using LIME; a rough sketch of this workflow follows the link below.
Know more here.
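The course itself works in R, but both H2O AutoML and LIME also have Python packages, so here is a minimal sketch of the same workflow in Python. The file name ("train.csv"), target column, class labels, and probability column names are hypothetical placeholders, not the course's own code.

```python
# Minimal sketch of the H2O AutoML + LIME workflow (the course uses R;
# this uses the analogous Python APIs). Dataset, target column, and the
# probability column names ("p0"/"p1") are hypothetical placeholders.
import h2o
from h2o.automl import H2OAutoML
from lime.lime_tabular import LimeTabularExplainer

h2o.init()
train = h2o.import_file("train.csv")                  # hypothetical file
target = "target"
features = [c for c in train.columns if c != target]
train[target] = train[target].asfactor()              # classification task

# Let AutoML train and rank several model families on a leaderboard
aml = H2OAutoML(max_models=10, seed=42)
aml.train(x=features, y=target, training_frame=train)
print(aml.leaderboard.head())

# LIME needs a plain numpy-in / probabilities-out prediction function,
# so wrap the H2O leader model accordingly
X = train[features].as_data_frame().values

def predict_proba(rows):
    frame = h2o.H2OFrame(rows.tolist(), column_names=features)
    preds = aml.leader.predict(frame).as_data_frame()
    # probability column names depend on the class labels
    return preds[["p0", "p1"]].values

explainer = LimeTabularExplainer(X, feature_names=features,
                                 class_names=["0", "1"],
                                 mode="classification")
exp = explainer.explain_instance(X[0], predict_proba, num_features=5)
print(exp.as_list())   # top local feature contributions for one prediction
```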
2| An Introduction to Explainable AI, and Why We Need it
About: This online tutorial offers a brief introduction to explainable AI, how it works and why it matters. The blog covers the Reverse Time Attention (RETAIN) model, Local Interpretable Model-Agnostic Explanations (LIME), and the role of explainability as newer and more innovative applications for neural networks emerge. Author Patrick Ferris illustrates these ideas through examples such as the one-pixel attack.
Know more here.
3| Getting a Window into your Black Box Model
About: In this tutorial, you will learn how to make sense of parts of a complex black-box model. The tutorial has two main goals. The first is to show how to build a "window" into the model, i.e., a local linear surrogate fitted to a complex global model. The second is to explain reason codes, which help in understanding the factors driving a prediction. Both ideas are sketched after the link below.
Know more here.
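To make the two goals concrete, here is a hedged sketch: a gradient-boosted model stands in for the black box, a ridge regression fitted to its outputs in a small neighbourhood of one instance acts as the local "window", and the surrogate's signed term contributions serve as simple reason codes. The synthetic data and the Gaussian perturbation scheme are illustrative assumptions, not the tutorial's own code.

```python
# Sketch: a local linear surrogate ("window") into a black-box model,
# with its term contributions used as simple reason codes. Synthetic
# data and a Gaussian perturbation scheme are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Pick one instance and sample a small neighbourhood around it
x0 = X[0]
rng = np.random.default_rng(0)
neighbours = x0 + rng.normal(scale=0.3, size=(500, X.shape[1]))
targets = black_box.predict_proba(neighbours)[:, 1]   # black-box outputs

# Fit a simple linear surrogate to the black box's local behaviour
surrogate = Ridge(alpha=1.0).fit(neighbours, targets)

# Simple reason codes: each feature's signed contribution at x0
contributions = surrogate.coef_ * x0
for i in np.argsort(-np.abs(contributions)):
    print(f"feature {i}: contribution {contributions[i]:+.3f}")
```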
4| Explainable AI: Scene Classification and GradCam Visualization
About: This is a 2-hour, hands-on project in which you will learn to train machine learning and deep learning models to predict the type of scenery in images. You will also cover the theory behind deep neural networks, convolutional neural networks (CNNs) and residual networks. You will learn how to build a deep learning model based on CNNs and residual blocks using Keras with TensorFlow 2.0 as a backend, and how to visualise its decisions with Grad-CAM, as sketched below.
Know more here.
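As background for the Grad-CAM part: the idea is to weight a convolutional layer's feature maps by the gradient of the class score and sum them into a coarse heatmap of the image regions that drove the prediction. Below is a condensed sketch in Keras with TensorFlow 2; using a pretrained ResNet50 and its "conv5_block3_out" layer is an assumption for illustration, since the course trains its own scene classifier.

```python
# Sketch of Grad-CAM in Keras / TensorFlow 2. The pretrained ResNet50
# and its last conv layer ("conv5_block3_out") are illustrative
# assumptions; the course builds its own CNN + residual-block model.
import numpy as np
import tensorflow as tf
from tensorflow import keras

model = keras.applications.ResNet50(weights="imagenet")
last_conv = model.get_layer("conv5_block3_out")

# Model mapping the input image to (conv feature maps, predictions)
grad_model = keras.Model(model.inputs, [last_conv.output, model.output])

def grad_cam(img_batch):
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_batch)
        top_class = tf.argmax(preds[0])
        score = preds[:, top_class]
    # Gradient of the class score w.r.t. the conv feature maps
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # channel importances
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted sum of maps
    cam = tf.nn.relu(cam)                                # keep positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalised heatmap

img = np.random.rand(1, 224, 224, 3).astype("float32")   # stand-in image
heatmap = grad_cam(keras.applications.resnet50.preprocess_input(img * 255))
print(heatmap.shape)   # e.g. (7, 7): upsample and overlay on the image
```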
5| Explaining Quantitative Measures of Fairness
About: This is a hands-on article that connects explainable AI methods with fairness measures and shows how modern explainability methods can enhance the usefulness of quantitative fairness metrics. You will learn how to decompose measures of fairness and allocate responsibility for any observed disparity among the model's input features. The tutorial is less about choosing the "correct" measure of model fairness and more about explaining whichever metric you have chosen; the sketch after the link below illustrates the decomposition.
Know more here.
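A hedged sketch of the core idea: measure a demographic parity difference in the model's output between two groups, then use SHAP values to attribute that gap to individual input features; the per-feature differences sum back to the overall disparity because the shared base value cancels. The synthetic credit-style data and group variable below are illustrative assumptions, not the article's scenario.

```python
# Sketch: decomposing a demographic parity difference with SHAP values.
# The synthetic data and "group" variable are illustrative assumptions.
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                   # protected attribute (0/1)
income = rng.normal(50 + 10 * group, 15, n)     # correlated with group
debt = rng.normal(20, 5, n)
X = np.column_stack([income, debt])
y = (income - debt + rng.normal(0, 5, n) > 30).astype(int)

model = xgboost.XGBClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # (n_samples, n_features)

# Demographic parity difference in the model's output (margin space) ...
margin = model.predict(X, output_margin=True)
parity_diff = margin[group == 1].mean() - margin[group == 0].mean()

# ... decomposed feature by feature via mean SHAP differences per group
per_feature = shap_values[group == 1].mean(0) - shap_values[group == 0].mean(0)
for name, d in zip(["income", "debt"], per_feature):
    print(f"{name}: {d:+.3f}")
print(f"sum {per_feature.sum():+.3f} ~ parity difference {parity_diff:+.3f}")
```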
6| Interpretable Machine Learning Applications: Part 1 & 2
About: This is a project-based course for beginners on creating interpretable machine learning applications with classification and regression models, decision tree, and random forest classifiers. In the first part, you will learn how to explain such prediction models by extracting the most important features and their values. In the second part, you will learn how to develop interpretable machine learning applications that explain individual predictions rather than the behaviour of the prediction model as a whole. Both themes are sketched after the links below.
For part 1, click here.
For part 2, click here.
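A brief sketch of the two themes, under assumed data: a random forest's global feature importances for the first part, and a LIME explanation of one individual prediction for the second.

```python
# Sketch of the two course themes: global feature importance (Part 1)
# and explaining a single prediction with LIME (Part 2). The dataset
# and train/test split are illustrative assumptions.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target,
                                          random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Part 1: global view, the model's most important features overall
ranked = sorted(zip(data.feature_names, model.feature_importances_),
                key=lambda p: -p[1])
for name, imp in ranked[:5]:
    print(f"{name}: {imp:.3f}")

# Part 2: local view, why the model classified one instance as it did
explainer = LimeTabularExplainer(X_tr,
                                 feature_names=list(data.feature_names),
                                 class_names=list(data.target_names),
                                 mode="classification")
exp = explainer.explain_instance(X_te[0], model.predict_proba,
                                 num_features=5)
print(exp.as_list())
```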
7| Responsible Machine Learning with Python
About: This is a series of notebooks introducing several approaches that increase transparency, accountability, and trustworthiness in machine learning models. The notebooks highlight techniques such as monotonic XGBoost models, partial dependence and individual conditional expectation (ICE) plots, Shapley explanations, decision tree surrogates, reason codes, ensembles of explanations, and LIME; two of these techniques are sketched after the link below.
Know more here.
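A short sketch of two of the highlighted techniques, on assumed synthetic data: an XGBoost model with monotonicity constraints, and a hand-rolled one-feature partial dependence computation.

```python
# Sketch: monotonic XGBoost plus a one-feature partial dependence
# curve. The synthetic data and the chosen constraints are assumptions.
import numpy as np
import xgboost

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 70, n)
utilization = rng.uniform(0, 1, n)              # credit utilisation ratio
X = np.column_stack([age, utilization])
y = (0.05 * age - 3 * utilization + rng.normal(0, 1, n) > 0).astype(int)

# Constrain predictions to rise with age (+1) and fall with
# utilization (-1), which keeps the model easier to reason about
model = xgboost.XGBClassifier(monotone_constraints=(1, -1)).fit(X, y)

# Partial dependence on age: sweep age across a grid while averaging
# the model's predicted probability over the rest of the data
for v in np.linspace(20, 70, 6):
    X_pd = X.copy()
    X_pd[:, 0] = v                              # force every row's age to v
    print(f"age={v:5.1f} -> mean P(y=1) = "
          f"{model.predict_proba(X_pd)[:, 1].mean():.3f}")
```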