The Shapley value is an attribution method from cooperative game theory developed by the economist Lloyd Shapley. It has recently garnered attention as a powerful way to explain the predictions of machine learning models, and it comes with desirable theoretical properties. For example, it is widely applied to ML models in the lending industry to explain why an applicant has been denied a loan. This article presents the top resources for learning about Shapley values in machine learning.
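To make the game-theoretic idea concrete, here is a minimal sketch of computing exact Shapley values for a toy three-player cooperative game. The game (the `payoff` function), the player names, and the bonus amounts are all made up for illustration; the averaging-over-orderings formula is the standard Shapley definition.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over every ordering in which players can join."""
    contrib = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            # Marginal contribution of p when joining this coalition.
            contrib[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    n_orderings = factorial(len(players))
    return {p: c / n_orderings for p, c in contrib.items()}

# Hypothetical game: each member adds 10 to the coalition's payoff,
# plus a 30-point bonus if players "a" and "b" are both present.
def payoff(coalition):
    bonus = 30 if {"a", "b"} <= coalition else 0
    return 10 * len(coalition) + bonus

phi = shapley_values(["a", "b", "c"], payoff)
# "a" and "b" split the bonus they jointly create; "c" gets only
# its solo contribution, and the values sum to the full payoff (60).
```

Exact computation enumerates all n! orderings, which is why practical libraries such as SHAP rely on approximations for real models with many features.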
The machine learning explainability course by Kaggle
Kaggle has a tutorial on SHAP values. SHAP (SHapley Additive exPlanations) values break down a prediction to show the impact of each feature. The tutorial explains how SHAP values work and how to interpret them, walks through the code for computing them, and finally gives the learner a practice problem that can be solved by applying SHAP values.
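The "break down a prediction" idea can be sketched without any library in the one case where SHAP values have a closed form: a linear model with independent features, where feature i's attribution is its weight times its deviation from the feature's mean. The loan-style feature names and all numbers below are hypothetical.

```python
# Hypothetical linear credit-scoring model: weights, feature means
# (the baseline), and one applicant's feature values are made up.
weights  = {"income": 0.4,  "debt": -0.7, "age": 0.1}
baseline = {"income": 50.0, "debt": 20.0, "age": 40.0}  # feature means
x        = {"income": 60.0, "debt": 35.0, "age": 30.0}  # one applicant

def predict(features):
    return sum(weights[k] * features[k] for k in weights)

# For a linear model with independent features:
#   phi_i = w_i * (x_i - E[x_i])
phi = {k: weights[k] * (x[k] - baseline[k]) for k in weights}

# Local accuracy (the "additive" in SHAP): attributions sum to the
# prediction's difference from the baseline prediction.
assert abs(sum(phi.values()) - (predict(x) - predict(baseline))) < 1e-9
```

Here the applicant's above-average debt pushes the score down by 10.5 while above-average income pushes it up by 4.0, which is exactly the kind of per-feature breakdown the Kaggle tutorial visualizes for non-linear models.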
Find the free tutorial here.
Shap Python tutorial by GitHub
The SHAP Python tutorial, hosted on GitHub, is a practical, hands-on introduction to explaining machine learning models with Shapley values. The tutorial is designed to build a solid understanding of computing and interpreting Shapley-based explanations of machine learning models. The broad topics covered in the tutorial are:
- Introduction to explainable AI with Shapley values
- Precision when interpreting predictive models in search of causal insights
- Quantitative measures of fairness
The tutorial explains how Shapley values are applied to text, tabular, genomic, and image examples. It is a living document and serves as an introduction to the SHAP Python package.
Find the free tutorial here.
Video Lecture by Fiddler AI
Fiddler AI released a series of explainer videos, with the first video in the series dedicated to Shapley values: their axioms, their challenges, and how they apply to the explainability of ML models. The 12-minute lecture, uploaded on YouTube, was given by Dr Ankur Taly, head of data science at Fiddler Labs. An 82-slide deck accompanies the lecture.
Interpretable Machine Learning: A Guide for Making Black Box Models Explainable
By Christoph Molnar · 2020
This book by Christoph Molnar is about making machine learning models and their decisions interpretable. The chapters focus on general model-agnostic methods for interpreting black box models, such as feature importance and accumulated local effects, and on explaining individual predictions with Shapley values. Some chapters of the book dive into KernelSHAP, TreeSHAP, SHAP feature importance, the SHAP dependence plot, the SHAP summary plot, and the advantages and disadvantages of SHAP.
Find the free eBook version here.
Explainable AI with Python
By Leonida Gianfagna, Antonio Di Cecco · 2021
This book provides a comprehensive presentation of the current concepts and available techniques for making machine learning systems more explainable. The book’s chapter on model-agnostic methods for XAI describes local explanation with SHAP, KernelSHAP, and TreeSHAP. Explainable AI with Python has been published by Springer, and the eBook version can be bought on Springer Shop.
Find the book here.