
A Greater Foundation For The Triumph Of Deep Learning With XAI


Back in the day, to avoid the peril of robots taking over humans, Asimov set down three laws to restrict the behaviour of robots: a robot may not injure a human being; a robot must obey human orders, except where they conflict with the first law; and a robot must protect its own existence, as long as doing so does not conflict with the first or second law. Instead of encoding robots with millions of such rules, empowering them to choose the right action in a dynamic environment, with both discrete and continuous spaces, helps us understand the behaviour of robotic and machine learning algorithms.

The book covers XAI white-box models for the explainability and interpretability of algorithms, with transparency about the accuracy of predicted outcomes and results from XAI applications, keeping ethics in view. Denis Rothman delves into XAI with SHAP (SHapley Additive exPlanations), a tool for machine learning model interpretability, working through Python Jupyter notebooks.

The book also covers AI fairness with WIT (Google's What-If Tool) from ethical and legal perspectives. It is critical for enterprises to be able to rely on algorithmic applications and their results. That reliance is only possible with clear explanations, and local interpretable model-agnostic explanations (LIME) offer an approach for increasing the credibility of a machine learning model and trusting an individual prediction. Since industries apply some of these algorithms many times a day, it is essential to interpret the results of their predictions locally.

Denis Rothman shows how to install the LIME, SHAP, and WIT tools, along with the ethical standards, best practices, and principles for maintaining balanced datasets. Early prediction of acute and chronic illnesses depends on catching the deterioration of a patient's vital signs, especially blood pressure, body temperature, heart rate, and respiration rate. The book covers machine learning algorithms in Python, including the k-nearest neighbours (KNN) algorithm, alongside standard AI programs for predicting acute illnesses.
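As a rough illustration of that approach, here is a minimal KNN sketch over vital-sign features; the feature values, labels, and choice of k are hypothetical, not the book's own example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical vital-sign records: [systolic BP, body temp (°C), heart rate, respiration rate]
X_train = np.array([
    [120, 36.8, 72, 14],   # stable
    [118, 37.0, 75, 15],   # stable
    [145, 38.9, 110, 24],  # deteriorating
    [150, 39.2, 118, 26],  # deteriorating
])
y_train = ["stable", "stable", "deteriorating", "deteriorating"]

# Scale features so no single vital sign dominates the distance metric
scaler = StandardScaler().fit(X_train)

# KNN classifies a patient by the labels of the k nearest training records
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(scaler.transform(X_train), y_train)

# Predict for a new patient's vitals
print(knn.predict(scaler.transform([[142, 38.5, 105, 22]])))
```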

Applying XAI algorithms increases the usable predictive power of AI here, because standard black-box predictions are not easily explainable to clinicians. The book covers enhancing an AI diagnosis with XAI by enhancing the KNN algorithm with location history. There are several ethical dilemmas in the choices a self-driving car's (SDC's) autopilot makes, aided by deep learning through reinforcement learning, clustering, regression, and classification algorithms. The book further covers building decision trees for an SDC's autopilot with standard AI and explores the options of XAI applied to those decision trees. Enterprises also need to avoid AI data poisoning and can leverage statistical visualisation tools such as Google's Facets to improve the explainability of machine learning models; the book includes useful Python implementations for Facets.
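As a hedged sketch of why decision trees lend themselves to XAI, here is a toy autopilot classifier whose sensor features, data, and depth limit are invented for illustration, not taken from the book.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical sensor readings: [obstacle distance (m), ego speed (km/h), adjacent lane clear (0/1)]
X = [[40, 60, 1], [8, 60, 1], [8, 60, 0], [25, 30, 1], [5, 80, 0], [35, 45, 0]]
y = ["keep_lane", "change_lane", "brake", "keep_lane", "brake", "keep_lane"]

# A shallow tree keeps the learned decision rules human-readable
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# Printing the rules shows exactly why the autopilot chose an action
print(export_text(tree, feature_names=["obstacle_dist", "speed", "lane_clear"]))
```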

DeepLIFT is a recursive prediction explanation method for deep learning. Classic Shapley value estimation computes the Shapley regression values, which are feature importances for the explanatory variables of a linear regression model in the presence of multicollinearity. Classic Shapley value estimation retrains the model on every feature subset $S \subseteq K$, where $K$ denotes the set of all features, and assigns an importance value to each feature in $K$ for the model's prediction.

To compute this, one model $f_{S \cup \{i\}}$ is trained with feature $i$ present, and another model $f_S$ is trained with that feature withheld. Classic Shapley value estimation then compares the two models' outputs on the current input: $f_{S \cup \{i\}}(z_{S \cup \{i\}}) - f_S(z_S)$, where $z_S$ denotes the values of the input features in the subset $S$.

The weighted average of all possible feature attributions can be computed as follows:
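$$\phi_i = \sum_{S \subseteq K \setminus \{i\}} \frac{|S|!\,(|K| - |S| - 1)!}{|K|!} \left[ f_{S \cup \{i\}}(z_{S \cup \{i\}}) - f_S(z_S) \right]$$

Here $\phi_i$ is the importance attributed to feature $i$, and the factorial weight counts the orderings in which the subset $S$ can precede feature $i$; this is the classic Shapley value formula as stated in the SHAP paper.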

SHAP (SHapley Additive exPlanations) is an open-source machine learning suite that contains algorithms leveraged for the explainability of machine learning models, together with data visualisations. It was introduced by Scott Lundberg and Su-In Lee. Enterprises would like to leverage white-box AI models that come with a suite of sophisticated algorithms and with datasets selected according to best practices and principles. The Shapley values from a machine learning model provide explainability through measures of each feature's contribution.

You can install SHAP in the Google Colaboratory environment; ensure that your Python version meets SHAP's minimum requirement before running the !pip install shap command.

Validate that your installation has been successful with the following command:
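The book's exact command is not reproduced here; a minimal check along these lines imports the library and prints its version.

```python
import shap

# A successful import confirms the installation; the version string double-checks it
print(shap.__version__)
```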

You can find code implementation examples in the book of how to analyse the Shapley values of a machine learning model on the IMDB movie reviews dataset, accessing the data directly from the GitHub repository at https://github.com/slundberg/shap/blob/master/shap/datasets.py. After importing the training dataset, perform a unit test with a data-interception function and display samples of the data for explainability.
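A minimal sketch of loading and inspecting the data, assuming the shap.datasets.imdb() loader defined in that file:

```python
import shap

# Load the IMDB movie reviews bundled with SHAP: review texts and boolean sentiment labels
corpus, y = shap.datasets.imdb()

# Display a sample to sanity-check the data before any transformation
print(corpus[0][:200])
print("label:", y[0])
```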

After the data transformation, we can split the data into training and testing datasets. Vectorise the dataset with the TfidfVectorizer module, extracting term frequency-inverse document frequency (TF-IDF) features with natural language processing techniques, and display the vectorised text features. We can then create a linear model leveraging the logistic regression algorithm and display the Shapley values of the text's tokens or features.
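Putting those steps together, here is a sketch along the lines of the SHAP documentation's sentiment-analysis example; the test_size, min_df, and C settings are illustrative assumptions, not the book's values.

```python
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load the IMDB reviews and split into training and testing sets
corpus, y = shap.datasets.imdb()
corpus_train, corpus_test, y_train, y_test = train_test_split(
    corpus, y, test_size=0.2, random_state=7
)

# Vectorise the text into TF-IDF features
vectorizer = TfidfVectorizer(min_df=10)
X_train = vectorizer.fit_transform(corpus_train)
X_test = vectorizer.transform(corpus_test)

# Fit a linear model with logistic regression
model = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
model.fit(X_train, y_train)

# Compute Shapley values for the linear model and inspect one review's tokens
explainer = shap.LinearExplainer(model, X_train)
shap_values = explainer.shap_values(X_test)
shap.initjs()
shap.force_plot(
    explainer.expected_value,
    shap_values[0, :],
    X_test[0, :].toarray()[0],
    feature_names=vectorizer.get_feature_names_out(),
)
```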

You can find more examples of analysing various datasets for the explainability and interpretability of machine learning models with the Google Facets, LIME, and WIT tools. LIME helps data scientists produce local interpretable model-agnostic explanations, improving the system's credibility with enterprise personnel. In stark contrast to SHAP, LIME probes the dataset around an instance to check whether the machine learning model is locally faithful, regardless of how the model was constructed. The book is a recommended read for data scientists, with implementations in Python, TensorFlow, and Google's XAI machine learning toolkits.
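As a hedged sketch of that local approach, the lime package can explain the TF-IDF logistic regression model from the earlier sketch; the class names and num_features choice here are illustrative assumptions.

```python
from lime.lime_text import LimeTextExplainer
from sklearn.pipeline import make_pipeline

# Chain the vectoriser and classifier so LIME can pass raw text straight through
pipeline = make_pipeline(vectorizer, model)

# LIME perturbs the text around one instance and fits an interpretable local surrogate
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    corpus_test[0], pipeline.predict_proba, num_features=6
)
print(explanation.as_list())  # top tokens with their local weights
```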

Book link: https://www.amazon.com/Hands-Explainable-XAI-Python-trustworthy/dp/1800208138/


Ganapathi Pulipaka

Dr Ganapathi Pulipaka is a Chief AI HPC Scientist and bestselling author of books covering AI infrastructure, supercomputing, high-performance computing (HPC), parallel computing, neural network architecture, data science, machine learning, and deep learning in C, C++, Java, Python, R, TensorFlow, and PyTorch on Linux, macOS, and Windows.