Explainable AI for Decision-Making Systems in the Medical Domain

The COVID-19 outbreak has spotlighted the need for responsive and transparent health monitoring systems. Reliable health information has never been more crucial. 

Researchers from Aalto University, Umeå University and KTH Royal Institute of Technology investigated the effectiveness of explainable artificial intelligence (XAI) approaches for decision-making in medical image analysis. Three explanation methods, LIME, SHAP and CIU, were applied to improve the understanding of decisions made by convolutional neural networks (CNNs).

Research methodology

The three post-hoc explanation methods, LIME, SHAP and CIU, were applied to gastrointestinal images obtained through capsule endoscopy, in which a pill-sized video camera is swallowed to look for possible signs of polyps, ulcers and tumours in the small intestine.

Researchers divided the entire process into four parts: data pre-processing; CNN model application; explanation generation with LIME, SHAP and CIU; and assessment of human decision-making.

Two datasets were used, one with more than 3,500 images and another with 600 images. Both were culled from a 10-hour-long video recorded via video capsule endoscopy (VCE) and were split randomly into training and validation sets for evaluation. 

“A CNN model with 50 epochs and a batch size of 16 was used to train the data set and achieve a validation accuracy of 98.58%. We trained our CNN model based on labels assigned to each image to recognise the bleeding versus normal (non-bleeding) medical images. The labels were made using the repository’s annotated images as a reference point,” as per the study.
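
The paper does not spell out the network architecture, so the following is only a minimal sketch, assuming a small Keras CNN and an image-folder layout; the 50 epochs and batch size of 16 follow the study, while everything else (layers, input size, paths) is illustrative.

```python
# Minimal, illustrative Keras setup for a bleeding vs. non-bleeding classifier.
# The study's exact architecture is not described here; layers, input size and
# directory paths are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution

train_ds = tf.keras.utils.image_dataset_from_directory(
    "vce_images/train", image_size=IMG_SIZE, batch_size=16, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "vce_images/val", image_size=IMG_SIZE, batch_size=16, label_mode="binary")

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = bleeding, 0 = non-bleeding
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=50)  # 50 epochs, batch size 16 as in the study
```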

Results

Three explanation methods, Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP) and Contextual Importance and Utility (CIU), were used. Researchers implemented LIME and SHAP in Python on Aalto University’s Triton high-performance computing cluster, while CIU explanations were created using RStudio Version 1.2.1335. Three sets of explanations were thus obtained, alongside a no-explanation setting included for comparative analysis.

LIME was tested on all validation datasets. For a bleeding image, the LIME explanation highlighted the area contributing positively to the bleeding class; for a non-bleeding image, it highlighted the area contributing to the non-bleeding class. LIME took around 11 seconds per image to generate an explanation.
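
As an illustration of how such per-image LIME explanations are typically produced with the lime library, here is a hedged sketch; the trained `model`, the `image` array and settings such as `num_samples` are assumptions, not the study's exact configuration.

```python
# Illustrative use of the lime library's image explainer on a single VCE frame.
# `model` is the trained CNN from above; `image` is one validation frame as a
# NumPy array with 0-255 pixel values (both assumptions).
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

explainer = lime_image.LimeImageExplainer()

def predict_fn(images):
    # Return class probabilities [non-bleeding, bleeding] for a batch of images;
    # the model is assumed to handle its own rescaling internally.
    probs = model.predict(np.asarray(images))
    return np.hstack([1 - probs, probs])

explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, hide_color=0, num_samples=1000)

# Highlight the superpixels that contribute positively to the predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(temp / 255.0, mask)  # image with explanation boundaries drawn
```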

Researchers applied model-agnostic Kernel SHAP to super-pixel-segmented images to explain the CNN’s predictions, again testing on all validation datasets. Each image received a SHAP explanation showing contributions to both the bleeding and non-bleeding classes: green marks the important features that support the class in question, while red marks features that oppose it. SHAP took around 10 seconds per image to generate an explanation.
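
The following is a minimal sketch of the common Kernel-SHAP-over-superpixels recipe, assuming the same `model` and `image` as above; the segmentation settings, background fill and sample count are illustrative, not taken from the paper.

```python
# Hedged sketch: model-agnostic Kernel SHAP over superpixels.
# `model` and `image` are assumed as in the earlier snippets.
import numpy as np
import shap
from skimage.segmentation import slic

segments = slic(image, n_segments=50, compactness=10, start_label=0)  # superpixels
n_seg = segments.max() + 1

def mask_image(zs, segmentation, img, background=0):
    # Build images where "off" superpixels are replaced by a background value.
    out = np.zeros((zs.shape[0],) + img.shape)
    for i in range(zs.shape[0]):
        out[i] = img
        for j in range(zs.shape[1]):
            if zs[i, j] == 0:
                out[i][segmentation == j] = background
    return out

def f(zs):
    # Probability of the bleeding class for each masked version of the image.
    return model.predict(mask_image(zs, segments, image))[:, 0]

explainer = shap.KernelExplainer(f, np.zeros((1, n_seg)))  # all-off baseline
shap_values = explainer.shap_values(np.ones((1, n_seg)), nsamples=1000)
# Positive SHAP values (shown green) support the class; negative values (red) oppose it.
```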

CIU provided explanations similar to LIME, depicting the important area of the image that contributes to the given class, either bleeding or non-bleeding. CIU took around 8 seconds per image to generate an explanation, the least of the three methods.
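
The study generated CIU explanations in R (RStudio); to keep the examples in one language, here is a from-scratch Python sketch of the Contextual Importance and Utility idea for a single superpixel, toggled against a background value. The simplified on/off perturbation and all names are assumptions, not the paper's implementation.

```python
# From-scratch sketch of Contextual Importance (CI) and Contextual Utility (CU)
# for one superpixel of an image. `model`, `image` and `segments` are assumed
# as in the earlier snippets.
import numpy as np

def ciu_for_superpixel(model, image, segments, segment_id, background=0):
    on = image.copy()
    off = image.copy()
    off[segments == segment_id] = background  # perturb: remove the superpixel

    p_on = float(model.predict(on[np.newaxis])[0, 0])    # P(bleeding) with the region
    p_off = float(model.predict(off[np.newaxis])[0, 0])  # P(bleeding) without it

    cmax, cmin = max(p_on, p_off), min(p_on, p_off)
    # CI: how much the output can change within this context,
    # relative to the full [0, 1] probability range.
    ci = (cmax - cmin) / 1.0
    # CU: where the current output sits within that range
    # (close to 1 means the region supports the predicted class).
    cu = (p_on - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```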

The results from all three methods were presented to different sets of participants, mostly from STEM fields. The results were first provided without explanation support and then with explanation support.  

When participants received results with LIME explanations, the mean number of correct decisions was 8.8 out of 12. For participants provided with SHAP explanations, the mean was 8.4 out of 12, while for those presented with CIU explanations it was 10.2 out of 12. CIU thus outperformed both LIME and SHAP.

“Our results support that participants with CIU will perform better in understanding the provided explanations and by that better distinguish between correct and incorrect explanation in comparison to participants having LIME or SHAP explanation support. Users with CIU explanation support were significantly better at recognising incorrect explanations in comparison to those having LIME explanations and also to some extent better than those having SHAP explanation support,” the research paper explained.

Kumar Gandharv
Kumar Gandharv, PGD in English Journalism (IIMC, Delhi), is setting out on a journey as a tech journalist at AIM. A keen observer of national and IR-related news.
