Understanding Explainability In Computer Vision

The session “Explainable AI For Computer Vision” was presented at CVDC 2020 by Avni Gupta, Technology Lead at Synduit. Organised by the Association of Data Scientists (ADaSCi), the premier global professional body of data science and machine learning professionals, CVDC 2020 is a first-of-its-kind virtual conference on Computer Vision.

The central theme of the talk was that computer vision models often act as black boxes: it is hard to explain what is going on inside them or how their outcomes are produced. She also covered some of the important libraries that help make explainable AI possible for Computer Vision models.

According to Gupta, when developers build a computer vision model, they often find themselves interacting with a black box, unaware of what feature extraction is happening at each layer. With the help of explainable AI, it becomes easier to judge when enough layers have been added and to see what features each layer has extracted.

Gupta started the talk by discussing why explainable AI, or XAI, is important. She put forward the following points-

  • Understandable AI: It includes the reasons and justifications to support actionable recommendations for faster and more accurate decisions.
  • Transparent AI: It covers the interpretability of predictions and their accuracy, with the ability to trace results back to the underlying data and logic.
  • Impactful AI: It provides an assessment of future business outcomes and allows scenario simulations to determine the best actions.

Gupta stated that there are several problems with Computer Vision models-

  1. ML and AI models are black boxes: it is hard to understand how they arrive at their predictions.
  2. ML models are non-intuitive and difficult for stakeholders to understand.
  3. The key issues are trust, reliability and accountability.

She then discussed some of the important techniques that can be used for interpreting Computer Vision models. The techniques are-

SHAP Gradient Explainer

SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. SHAP's Gradient Explainer combines ideas from SHAP and Integrated Gradients, approximating SHAP values with expected gradients.
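
To give a flavour of how this looks in practice, below is a minimal sketch that applies shap.GradientExplainer to an image classifier. The Keras model `model` and the NumPy image batches `x_train` and `x_test` are assumed placeholders, not code from the talk.

```python
# Minimal sketch of SHAP's GradientExplainer on an image classifier.
# Assumes a trained tf.keras model `model` and NumPy image batches
# `x_train` (background) and `x_test` (images to explain).
import numpy as np
import shap

# A small random sample of training images serves as the background
# distribution over which expected gradients are estimated.
background = x_train[np.random.choice(len(x_train), 100, replace=False)]

explainer = shap.GradientExplainer(model, background)

# SHAP values have the same shape as the input images, one array per class.
shap_values = explainer.shap_values(x_test[:5])

# Overlay the attributions on the original images.
shap.image_plot(shap_values, x_test[:5])
```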

Visual Activation Layers

Visual activation layers are used when you build a convolutional network and want to see what each layer of the model is actually looking at, by visualising the intermediate feature maps it produces.
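
One minimal way to do this with tf.keras is sketched below: build a second model that returns the outputs of the convolutional layers and plot a few of the resulting feature maps. The trained model `model` and the image batch `images` are assumed placeholders.

```python
# Minimal sketch: visualising intermediate activations of a tf.keras CNN.
# Assumes a trained model `model` and an image batch `images` of shape
# (N, H, W, C) matching the model's input.
import tensorflow as tf
import matplotlib.pyplot as plt

# Build a model that exposes every convolutional layer's output.
conv_outputs = [l.output for l in model.layers
                if isinstance(l, tf.keras.layers.Conv2D)]
activation_model = tf.keras.Model(inputs=model.input, outputs=conv_outputs)

activations = activation_model.predict(images)

# Plot the first 8 feature maps of the first conv layer for the first image
# (assumes that layer has at least 8 channels).
first_layer = activations[0]
fig, axes = plt.subplots(1, 8, figsize=(16, 2))
for i, ax in enumerate(axes):
    ax.imshow(first_layer[0, :, :, i], cmap="viridis")
    ax.axis("off")
plt.show()
```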

Occlusion Sensitivity

Occlusion Sensitivity greys out regions of an image to check how occluding each part of the image affects the predictions of your convolutional network.
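
The idea can be sketched from scratch in a few lines, as below. The trained tf.keras classifier `model`, the preprocessed image `image`, the patch size and grey value are assumptions for illustration.

```python
# Minimal from-scratch sketch of occlusion sensitivity. Assumes a trained
# tf.keras classifier `model` and a single preprocessed image `image`
# of shape (H, W, C) with pixel values roughly in [0, 1].
import numpy as np

def occlusion_map(model, image, class_index, patch_size=16, grey_value=0.5):
    h, w, _ = image.shape
    baseline = model.predict(image[np.newaxis])[0, class_index]
    heatmap = np.zeros((h // patch_size, w // patch_size))

    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            occluded = image.copy()
            occluded[i:i + patch_size, j:j + patch_size, :] = grey_value
            prob = model.predict(occluded[np.newaxis])[0, class_index]
            # A large drop means the occluded region was important.
            heatmap[i // patch_size, j // patch_size] = baseline - prob
    return heatmap
```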

Grad-CAM

Grad-CAM, or Gradient-weighted Class Activation Mapping, is one of the most widely used techniques. It makes Convolutional Neural Network (CNN)-based models more transparent by visualising the regions of the input that are “important” for the model's predictions, producing visual explanations.
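
A common way to implement Grad-CAM with tf.keras and a gradient tape is sketched below; the model `model`, the name of its last convolutional layer and the preprocessed batch `img` are assumed placeholders.

```python
# Minimal Grad-CAM sketch for a tf.keras CNN. Assumes a trained model
# `model`, the name of its last convolutional layer `last_conv_name`
# and a preprocessed image batch `img` of shape (1, H, W, C).
import tensorflow as tf

def grad_cam(model, img, last_conv_name, class_index=None):
    # Model mapping the input image to the last conv feature maps
    # and to the final predictions.
    grad_model = tf.keras.Model(
        model.input, [model.get_layer(last_conv_name).output, model.output]
    )

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img)
        if class_index is None:
            class_index = tf.argmax(preds[0])
        class_score = preds[:, class_index]

    # Gradient of the class score w.r.t. the conv feature maps, averaged
    # over space to get one weight per channel.
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))

    # Weighted sum of the feature maps, followed by ReLU and normalisation.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()  # upsample to the image size before overlaying
```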

Integrated Gradients

Integrated Gradients is a variation on computing the gradient of the prediction output with respect to the input features: instead of a single gradient, it accumulates gradients along a straight path from a baseline (such as an all-black image) to the actual input.
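
A minimal tf.keras sketch of this accumulation is shown below, using an all-black baseline for illustration; the model, image and class index are assumed placeholders.

```python
# Minimal Integrated Gradients sketch for a tf.keras classifier. Assumes
# a trained model `model`, a preprocessed image `image` of shape (H, W, C)
# and a target `class_index`. The baseline is an all-black image.
import tensorflow as tf

def integrated_gradients(model, image, class_index, steps=50):
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    baseline = tf.zeros_like(image)

    # Interpolate along the straight-line path from the baseline to the input.
    alphas = tf.linspace(0.0, 1.0, steps + 1)[:, None, None, None]
    interpolated = baseline[None] + alphas * (image[None] - baseline[None])

    # Gradient of the target class score w.r.t. every interpolated image.
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        scores = model(interpolated)[:, class_index]
    grads = tape.gradient(scores, interpolated)

    # Approximate the path integral (trapezoidal rule) and scale by the
    # difference between the input and the baseline.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (image - baseline) * avg_grads
```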

Furthermore, Gupta discussed some of the crucial libraries and tools that are helpful while building an explainable Computer Vision model, with some practical implementations in Python.

The libraries and tools are-

  • ELI5: ELI5 is a Python package that helps to debug machine learning classifiers and explain their predictions. It provides support for popular machine learning frameworks and packages such as Scikit-Learn, XGBoost and LightGBM.
  • tf-explain: tf-explain offers interpretability methods for TensorFlow 2.0 to ease the understanding of neural networks. With either its core API or its tf.keras callbacks, you can get feedback on the training of your models (see the sketch after this list).
  • AIX-360: The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations, along with proxy explainability metrics.
  • tootorch: tootorch implements XAI techniques for Computer Vision on top of PyTorch models.
  • What-If Tool: The What-If Tool is a feature of the open-source TensorBoard web application that lets users analyze an ML model without writing code.
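
As an illustration of the callback style mentioned for tf-explain, here is a rough sketch based on its documented GradCAMCallback; the data, model, layer name and argument names are assumptions and may differ between tf-explain versions.

```python
# Rough sketch of using a tf-explain callback during training.
# `model`, `x_train`, `y_train`, `x_val`, `y_val` are assumed to exist;
# argument names follow the tf-explain docs and may vary by version.
from tf_explain.callbacks.grad_cam import GradCAMCallback

callbacks = [
    GradCAMCallback(
        validation_data=(x_val, y_val),
        class_index=0,                 # class whose activation map is drawn
        layer_name="conv2d_1",         # a conv layer of the assumed model
        output_dir="./explanations",   # Grad-CAM images are written here
    )
]

model.fit(x_train, y_train, epochs=5, callbacks=callbacks)
```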

Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
