Council Post: Explainable AI and its impact on creating a data-driven culture


Over the years, the AI & ML domain has evolved by leaps and bounds. Despite this progress, AI/ML models still face a few challenges, including:

  • Lack of explainability and trust.
  • Security, privacy, and ethical regulations.
  • Bias in AI systems.

These challenges can make or break AI systems. 

With the rapid evolution of ML, metrics beyond accuracy have gained importance, calling for explainable AI (XAI). As shown in the scatter plot below, accuracy and explainability of machine learning models often trade off against each other. For instance, deep learning techniques offer high accuracy but poor explainability, whereas decision trees lag in predictive performance but are easy to explain.



Enter explainable AI (XAI)

As models get more complex, developers often fail to understand why a system has arrived at a specific decision. This is where explainable AI comes into play.
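To make the contrast concrete, here is a minimal sketch of a fully white-box model: for a linear scorer, each feature's contribution to a single prediction can be read off directly as weight times value, which is exactly the kind of per-decision explanation a black-box model cannot give without extra tooling. The feature names and weights below are hypothetical, purely for illustration.

```python
# White-box explanation sketch: in a linear model, each feature's
# contribution to one prediction is simply weight * value.
# Feature names and weights are made up for illustration.

def explain_linear(weights, bias, x, names):
    """Return the score and per-feature contributions for one input."""
    contributions = {n: w * v for n, w, v in zip(names, weights, x)}
    score = bias + sum(contributions.values())
    return score, contributions

names = ["visits_per_week", "avg_basket_value", "days_since_last_order"]
weights = [0.8, 0.05, -0.3]
bias = 1.0

score, parts = explain_linear(weights, bias, [3, 40.0, 10], names)
print(f"score = {score:.2f}")
# List contributions from most to least influential.
for name, c in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>22}: {c:+.2f}")
```

A deep network offers no such direct decomposition, which is why XAI techniques approximate it after the fact.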

Explainable AI (XAI) aims to make the black-box decisions of AI systems understandable. According to ResearchAndMarkets, the global XAI market is estimated to reach USD 21.03 billion by 2030, growing at a CAGR of 19% between 2021 and 2030.


XAI is a catch-all term for the movements, initiatives, and efforts made in response to AI transparency and trust issues.

According to the Defense Advanced Research Projects Agency (DARPA), XAI aims to produce more explainable ML models while maintaining a high level of prediction accuracy.

Today, explainable AI (XAI) is a hot topic across industries, including retail, healthcare, media and entertainment, aerospace and defence. For example, in retail, XAI helps predict upcoming trends, with logical reasoning to boot, allowing the retailer to manage inventory better. In ecommerce, explainable AI helps make sense of the recommendation system's suggestions based on customers' search history and spending habits.

Need for XAI

In general, the need for explaining AI systems arises from four reasons:

Explain to justify: XAI ensures an auditable and provable way to defend algorithmic decisions as fair and ethical, which builds trust.

Explain to control: Understanding system behaviour provides greater visibility into unknown vulnerabilities and flaws. This helps identify and correct errors rapidly, thus enabling control.

Explain to improve: When users know why the system produced a specific output, they also know how to make it smarter. Thus, XAI can be the foundation for further iterations and improvements.

Explain to discover: Asking for explanations can help users learn new facts and gather actionable intelligence from the data.

Data-driven culture 

Interpretable ML is a core concept of XAI. It helps embed trust in AI systems by bringing fairness (predictions without discernible bias), accountability (predictions reliably traceable back to something or someone), and transparency (explaining how and why predictions are made).

Most importantly, understanding how the AI system makes its decisions leads to better AI governance within the organisation and improves model performance.

Also, knowing why and how the model works, and why it fails, enables ML engineers and data scientists to optimise the model and helps create a data-driven culture. For instance, understanding model behaviour across various input data distributions helps surface biases in the input data, which ML engineers can use to make adjustments and build a fairer, more robust model.
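One common, model-agnostic way to probe behaviour like this is permutation importance: shuffle one feature's values, re-score the model, and read the accuracy drop as that feature's importance. Below is a minimal pure-Python sketch with a toy model and synthetic data, purely for illustration; real pipelines would typically use a library implementation.

```python
import random

# Permutation-importance sketch: shuffle one feature column and measure
# how much the model's accuracy drops. A large drop means the model
# relies on that feature; near zero means it is ignored.

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, rng, n_repeats=20):
    base = accuracy(model, X, y)
    drops = []
    for _ in range(n_repeats):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)  # break the feature-label relationship
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, column)]
        drops.append(base - accuracy(model, X_perm, y))
    return sum(drops) / len(drops)

# Toy model: predicts 1 when feature 0 exceeds 0.5; feature 1 is pure noise.
model = lambda row: int(row[0] > 0.5)
rng = random.Random(0)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [int(row[0] > 0.5) for row in X]

imp0 = permutation_importance(model, X, y, 0, rng)
imp1 = permutation_importance(model, X, y, 1, rng)
print(f"feature 0 importance: {imp0:.3f}")  # large drop: model relies on it
print(f"feature 1 importance: {imp1:.3f}")  # zero: the noise feature is ignored
```

The same probe, run separately on different input distributions (say, different customer segments), can reveal when a model leans on a feature that encodes an unwanted bias.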

This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill out the form here.


Anirban Nandi
With close to 15 years of professional experience, Anirban specialises in Data Sciences, Business Analytics, and Data Engineering, spanning various verticals of online and offline Retail and building analytics teams from the ground up. Following his Masters from JNU in Economics, Anirban started his career at Target and spent more than eight years developing in-house products like Customer Personalisation, Recommendation Systems, and Search Engine Classifiers. Post Target, Anirban became one of the founding members at Data Labs (Landmark Group) and spent more than 4.5 years building the onshore and offshore team of ~100 members working on Assortment, Inventory, Pricing, Marketing, eCommerce and Customer analytics solutions.

