7 Free Resources To Learn Explainable AI


Explainable AI (XAI) is key to establishing trust among users and countering the black-box nature of machine learning models. By making model decisions traceable, XAI enhances accountability and reliability in machine learning systems. Tech giants such as Google and IBM have long poured resources into explainable AI to illuminate the decision-making process of such models.

Below are the top free resources to understand Explainable AI (XAI) in detail.

(The list is in no particular order)

1| Explainable Machine Learning with LIME and H2O in R

About: Explainable Machine Learning with LIME and H2O in R is a hands-on, guided introduction to explainable machine learning. Topics covered include a project overview, importing libraries, preprocessing data with the recipes package, running AutoML, and exploring the leaderboard and model performance. By the end of this project, you will be able to use the H2O and LIME packages in R for automatic and interpretable machine learning, build classification models quickly with H2O AutoML, and interpret model predictions using LIME.

Know more here.
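The course above works in R, but the core idea behind LIME is language-agnostic: perturb an instance, query the black-box model on the perturbations, and fit a proximity-weighted linear model as a local surrogate. A minimal Python sketch of that idea (not the course's own code; all names here are illustrative) might look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.datasets import make_classification

# Train a "black-box" model to be explained
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def lime_style_explanation(model, x, n_samples=1000, width=1.0, seed=0):
    """Fit a locally weighted linear surrogate around instance x."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    preds = model.predict_proba(Z)[:, 1]
    # Weight perturbed samples by proximity to x (RBF kernel)
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / width ** 2)
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_  # one local attribution per feature

attributions = lime_style_explanation(model, X[0])
print(attributions.shape)
```

The surrogate's coefficients approximate how each feature drives the prediction in the neighbourhood of the chosen instance, which is exactly what the LIME package automates.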

2| An Introduction to Explainable AI, and Why We Need it

About: This online tutorial offers a brief introduction to explainable AI, how it works and why it matters. The blog covers the Reversed Time Attention (RETAIN) model, Local Interpretable Model-Agnostic Explanations (LIME), and how explainable AI keeps pace as newer and more innovative applications of neural networks emerge. Author Patrick Ferris illustrates these ideas through instances such as the one-pixel attack.

Know more here.
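The one-pixel attack mentioned above targets deep image classifiers, but the underlying intuition can be shown on a toy linear scorer: search for the single pixel whose change most damages the model's score. This sketch is an illustration of the concept only, not the attack from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))      # weights of a toy linear image "classifier"
img = rng.normal(size=(8, 8))    # a toy 8x8 image

def score(image):
    return float((w * image).sum())  # higher score -> more confident class 1

base = score(img)

# Try setting each single pixel to an adversarial extreme value
best_delta, best_pixel = 0.0, None
for i in range(8):
    for j in range(8):
        attacked = img.copy()
        attacked[i, j] = -np.sign(w[i, j]) * 10.0  # push against the weight
        delta = base - score(attacked)             # drop in score
        if delta > best_delta:
            best_delta, best_pixel = delta, (i, j)

print(best_pixel, best_delta)
```

Real one-pixel attacks use black-box search (e.g. differential evolution) over a CNN, but the takeaway is the same: a single coordinate can disproportionately sway a fragile model, which is one argument for explainability tooling.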

3| Getting a Window into your Black Box Model

About: In this tutorial, you will learn how to make sense of parts of a complex black-box model. The tutorial has two main goals. The first is to show how to build the "windows", i.e. local linear surrogate models built on top of a complex global model. The second is to explain reason codes, which help in understanding the factors driving a prediction.

Know more here.
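For a linear surrogate, reason codes have a particularly clean form: each feature's contribution is its coefficient times the feature's deviation from a baseline, sorted by magnitude. A small sketch under that assumption (the function and names are illustrative, not the tutorial's code):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data and a fitted linear surrogate standing in for the local model
X = np.random.default_rng(1).normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * X[:, 2]
surrogate = LinearRegression().fit(X, y)

def reason_codes(model, x, baseline, feature_names):
    """Per-feature contributions to x's prediction relative to a baseline."""
    contrib = model.coef_ * (x - baseline)
    order = np.argsort(-np.abs(contrib))  # strongest drivers first
    return [(feature_names[i], float(contrib[i])) for i in order]

codes = reason_codes(surrogate, X[0], X.mean(axis=0), ["f0", "f1", "f2"])
print(codes)
```

Reading the top entries answers the practical question a reason code exists for: "which factors pushed this particular prediction up or down, and by how much?"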

4| Explainable AI: Scene Classification and GradCam Visualization

About: This is a 2-hour long hands-on project where you will learn to train machine learning and deep learning models to predict the type of scenery in images. You will also understand the theory behind deep neural networks, convolutional neural networks (CNNs) and residual nets. You will learn how to build a deep learning model based on CNNs and residual blocks using Keras with TensorFlow 2.0 as a backend, and more.

Know more here.
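Grad-CAM, the visualization technique in this project, reduces to a small amount of arithmetic once you have the last convolutional layer's activations and the gradient of the class score with respect to them: average the gradients per channel, take the weighted sum of the feature maps, and apply a ReLU. A NumPy sketch with synthetic stand-ins for those tensors (the project itself computes them through Keras):

```python
import numpy as np

# Synthetic stand-ins for the inputs Grad-CAM needs:
#   A:     activations of the last conv layer, shape (channels, H, W)
#   dY_dA: gradient of the target class score w.r.t. those activations
rng = np.random.default_rng(0)
A = rng.random((4, 7, 7))
dY_dA = rng.normal(size=(4, 7, 7))

def grad_cam(activations, gradients):
    # 1. Channel weights: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))
    # 2. Weighted sum of the feature maps
    cam = np.tensordot(weights, activations, axes=1)
    # 3. ReLU: keep only regions that positively support the class
    return np.maximum(cam, 0.0)

heatmap = grad_cam(A, dY_dA)
print(heatmap.shape)  # same spatial size as the conv feature maps
```

In practice the resulting heatmap is upsampled to the input image size and overlaid on the photo to show which regions drove the scene prediction.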

5| Explaining Quantitative Measures of Fairness

About: This is a hands-on article that connects explainable AI methods with fairness measures and shows how modern explainability methods can enhance the usefulness of quantitative fairness metrics. You will also learn how to decompose measures of fairness and allocate responsibility for any observed disparity among each of the model's input features. The tutorial focuses less on choosing the "correct" measure of model fairness and more on explaining whichever metric you have chosen.

Know more here.
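The article uses SHAP values for the decomposition; for a linear model the same decomposition is exact and can be shown in a few lines. This sketch (synthetic data, illustrative names) splits a demographic parity gap into per-feature contributions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)         # protected attribute (0/1)
income = rng.normal(size=n) + 0.8 * group  # feature correlated with group
debt = rng.normal(size=n)                  # feature independent of group
X = np.column_stack([income, debt])

coef = np.array([1.5, -0.5])               # a (pretend) fitted linear scorer
scores = X @ coef

# Demographic parity gap: difference in mean score between the two groups
gap = scores[group == 1].mean() - scores[group == 0].mean()

# For a linear model the gap decomposes exactly across input features
per_feature = coef * (X[group == 1].mean(axis=0) - X[group == 0].mean(axis=0))
print(gap, per_feature)
```

Here nearly all of the gap is attributed to the income feature, because it is the only input correlated with group membership; that per-feature allocation of responsibility is what the article generalizes to non-linear models via SHAP.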

6| Interpretable Machine Learning Applications: Part 1 & 2

About: This is a project-based course for beginners on creating interpretable machine learning applications with classification models such as decision tree and random forest classifiers. In the first part, you will learn how to explain such prediction models by extracting the most important features and their values. In the second part, you will learn how to develop interpretable machine learning applications that explain individual predictions rather than the behaviour of the prediction model as a whole.

For part 1, click here.

For part 2, click here.
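The first part's central move, extracting the most important features from a tree-based classifier, is a one-liner in scikit-learn. A minimal sketch (a generic example, not the course's notebook):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(data.data, data.target)

# Rank features by mean impurity decrease across the forest
order = np.argsort(clf.feature_importances_)[::-1]
top5 = [(data.feature_names[i], round(float(clf.feature_importances_[i]), 3))
        for i in order[:5]]
print(top5)
```

These importances describe the model's global behaviour; explaining an individual prediction, the subject of part 2, requires local methods such as LIME or SHAP instead.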

7| Responsible Machine Learning with Python

About: This is a series of notebooks that introduces several approaches for increasing transparency, accountability, and trustworthiness in ML models. The notebooks highlight techniques such as monotonic XGBoost models, partial dependence and individual conditional expectation (ICE) plots, Shapley explanations, decision tree surrogates, reason codes, ensembles of explanations, and LIME.

Know more here.
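Partial dependence, one of the techniques the notebooks cover, is simple enough to compute by hand: sweep one feature over a grid while holding the others at their observed values, and average the model's predictions at each grid point. A sketch with a generic boosted model (illustrative, not the notebooks' code):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid_size=20):
    """Average prediction as one feature sweeps a grid of values."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v  # force the feature to the grid value everywhere
        pd_values.append(model.predict(Xv).mean())
    return grid, np.array(pd_values)

grid, pd_vals = partial_dependence(model, X, feature=0)
print(pd_vals.shape)
```

Plotting `pd_vals` against `grid` gives the familiar partial dependence curve; ICE plots are the same computation without the final averaging, one curve per row of `X`.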

Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
