7 Free Resources To Learn Explainable AI


Explainable AI (XAI) is key to establishing trust among users and countering the black-box nature of machine learning models. In general, XAI enhances the accountability and reliability of machine learning models. Tech giants like Google, IBM and others have long poured resources into explainable AI to illuminate the decision-making processes of such models. 

Below are the top free resources to understand Explainable AI (XAI) in detail.

(The list is in no particular order)


1| Explainable Machine Learning with LIME and H2O in R

About: Explainable Machine Learning with LIME and H2O in R is a hands-on, guided introduction to explainable machine learning. Topics covered include a project overview, importing libraries, preprocessing data with the recipes package, running H2O AutoML, exploring the leaderboard, and evaluating model performance. By the end of this project, you will be able to use the H2O and LIME packages in R for automatic and interpretable machine learning, build classification models quickly with H2O AutoML, and interpret model predictions using LIME. 

Know more here.
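The core idea behind LIME, which this course applies in R, can be sketched in a few lines of Python: perturb an instance, query the black-box model, and fit a distance-weighted linear model whose coefficients serve as the local explanation. A minimal numpy sketch; the `black_box` function below is an invented stand-in, not anything from the course:

```python
import numpy as np

# Hypothetical black-box classifier over two features (for illustration only).
def black_box(X):
    return (X[:, 0] ** 2 + np.sin(X[:, 1]) > 1.0).astype(float)

def lime_explain(instance, predict_fn, n_samples=5000, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around one instance (LIME's core idea)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise.
    X = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    y = predict_fn(X)
    # 2. Weight samples by proximity to the instance (exponential kernel).
    d = np.linalg.norm(X - instance, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 3. Solve a weighted least-squares linear fit: its coefficients are the explanation.
    A = np.hstack([X, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local weights (intercept dropped)

weights = lime_explain(np.array([1.0, 0.5]), black_box)
```

The real LIME packages (in R and Python) add proper sampling strategies, kernels and feature selection on top of this basic recipe.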


2| An Introduction to Explainable AI, and Why We Need it

About: This is an online tutorial offering a brief introduction to explainable AI, how it works and why it matters. The blog covers the Reverse Time Attention (RETAIN) model, Local Interpretable Model-Agnostic Explanations (LIME), and how explainable AI keeps pace as newer and more innovative applications for neural networks emerge. Author Patrick Ferris illustrates these ideas through examples such as the one-pixel attack. 

Know more here.

3| Getting a Window into your Black Box Model

About: In this tutorial, you will learn how to make sense of parts of a complex black-box model. The tutorial has two main goals. The first is to show how to build a "window" into the model: a local linear surrogate fit to a complex global model. The second is to explain reason codes, which help identify the factors driving an individual prediction.

Know more here.
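The surrogate idea the tutorial describes can be shown in miniature: fit a trivially interpretable model, here a one-split decision stump, to a black-box model's outputs and read the learned rule directly. The lending scenario, cutoff and `black_box` function below are all invented for illustration:

```python
import numpy as np

# Hypothetical black box: approves a loan when income exceeds a hidden cutoff.
def black_box(income):
    return (income > 50.0).astype(float)

def fit_stump(x, y):
    """Global surrogate: a depth-1 decision tree (stump) fit to black-box outputs.
    Scans candidate thresholds and keeps the one minimising squared error."""
    best_t, best_err = None, np.inf
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best_err:
            best_t, best_err = t, err
    return best_t

rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 1000)
y = black_box(x)
threshold = fit_stump(x, y)  # recovers a split just below the hidden cutoff
```

A reason code for one applicant then falls out of the surrogate's rule, e.g. "income at or below the learned threshold" as the factor driving a rejection.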

4| Explainable AI: Scene Classification and GradCam Visualization

About: This is a two-hour, hands-on project in which you will train machine learning and deep learning models to predict the type of scenery in images. You will also cover the theory behind deep neural networks, convolutional neural networks (CNNs) and residual networks, and learn how to build a deep learning model from CNN and residual blocks using Keras with TensorFlow 2.0 as the backend.

Know more here.
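Grad-CAM, which this project uses for visualisation, boils down to simple arithmetic once a framework has produced the convolutional feature maps and the gradients of the class score with respect to them: average the gradients per channel, take a weighted sum of the maps, and keep only positive evidence. The arrays below are random stand-ins for those tensors, not outputs of a real network:

```python
import numpy as np

# Stand-ins for a CNN's last conv-layer feature maps (channels, H, W)
# and the gradients of the target class score w.r.t. those maps.
feature_maps = np.random.default_rng(0).random((8, 7, 7))
gradients = np.random.default_rng(1).standard_normal((8, 7, 7))

def grad_cam(maps, grads):
    """Grad-CAM: weight each channel by its average gradient, sum, then ReLU."""
    alphas = grads.mean(axis=(1, 2))          # one importance weight per channel
    cam = np.tensordot(alphas, maps, axes=1)  # weighted sum over channels -> (H, W)
    return np.maximum(cam, 0.0)               # ReLU keeps only positive evidence

heatmap = grad_cam(feature_maps, gradients)   # shape (7, 7)
```

In practice the heatmap is then upsampled to the input image's size and overlaid on it, which is what the visualisations in the project show.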

5| Explaining Quantitative Measures of Fairness

About: This is a hands-on article that connects explainable AI methods with fairness measures and shows how modern explainability methods can enhance the usefulness of quantitative fairness metrics. You will also learn how to decompose measures of fairness and allocate responsibility for any observed disparity among each of the model's input features. The tutorial is less about choosing the "correct" measure of model fairness and more about explaining whichever metric you have chosen.

Know more here.
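For a purely additive model, the decomposition the article describes is exact: the gap in mean scores between two groups splits into one term per input feature. A minimal numpy sketch with an invented scoring model and synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)               # protected attribute (0/1)
income = rng.normal(50 + 10 * group, 5, n)  # feature correlated with the group
debt = rng.normal(20, 5, n)                 # feature independent of the group

# Hypothetical additive scoring model: score = 0.8*income - 0.5*debt.
w = np.array([0.8, -0.5])
X = np.column_stack([income, debt])
score = X @ w

# Demographic parity gap: difference in mean score between the groups.
gap = score[group == 1].mean() - score[group == 0].mean()

# For an additive model the gap decomposes exactly, feature by feature:
per_feature = w * (X[group == 1].mean(axis=0) - X[group == 0].mean(axis=0))
# per_feature.sum() equals gap, so each term is that feature's share of the disparity.
```

SHAP generalises this beyond additive models: Shapley values give a per-feature, per-instance decomposition whose group-wise averages play the same role.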

6| Interpretable Machine Learning Applications: Part 1 & 2

About: This is a project-based course for beginners on creating interpretable machine learning applications with classification and regression models, decision trees and random forest classifiers. In the first part, you will learn how to explain such prediction models by extracting the most important features and their values. In the second part, you will learn how to develop applications that explain individual predictions rather than the behaviour of the prediction model as a whole. 

For part 1, click here.

For part 2, click here.
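One simple, model-agnostic way to extract "the most important features", in the spirit of part 1, is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. The data and the oracle-style model below are hypothetical, not the course's classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # feature 2 is irrelevant

def predict(X):
    # Hypothetical model that happens to match the labelling rule exactly.
    return (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)

base_acc = (predict(X) == y).mean()
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature's link to the labels
    importance.append(base_acc - (predict(Xp) == y).mean())
# Feature 0 dominates; feature 2, being unused, contributes exactly nothing.
```

Shuffling destroys a feature's relationship with the target without changing its marginal distribution, so the accuracy drop isolates that feature's contribution.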

7| Responsible Machine Learning with Python

About: This is a series of notebooks that introduce several approaches that increase transparency, accountability, and trustworthiness in ML models. The notebooks highlight techniques such as monotonic XGBoost models, partial dependence and individual conditional expectation (ICE) plots, Shapley explanations, decision tree surrogates, reason codes, LIME, and ensembles of explanations.

Know more here.
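Two of the techniques the notebooks cover, partial dependence and ICE plots, share one mechanism: sweep a feature over a grid of values and record the model's predictions, averaged over the data for partial dependence or kept per-row for ICE. A sketch with a made-up model:

```python
import numpy as np

# Hypothetical black-box model over two features (stand-in for a fitted model).
def model(X):
    return np.sin(X[:, 0]) + 0.1 * X[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))

def partial_dependence(predict_fn, X, feature, grid):
    """For each grid value, set the chosen feature to it for every row
    and average the model's predictions over the dataset."""
    pd = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        pd.append(predict_fn(Xv).mean())
    return np.array(pd)

grid = np.linspace(-2, 2, 9)
pd0 = partial_dependence(model, X, feature=0, grid=grid)  # traces the sin shape
```

Dropping the `.mean()` and keeping one curve per row turns the same loop into an ICE plot, which reveals interactions that averaging hides.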


Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
