Council Post: Explainable AI and its impact on creating a data-driven culture

Explainable AI (XAI) aims to address how the black-box decisions of AI systems are made.

Over the years, the AI and ML domain has evolved by leaps and bounds. Despite this progress, AI/ML models still face a few challenges, including:

  • Lack of explainability and trust.
  • Security, privacy, and ethical regulations.
  • Bias in AI systems.

These challenges can make or break AI systems. 


With the rapid evolution of ML, metrics linked to accuracy have gained outsized importance, often at the expense of interpretability, calling for explainable AI (XAI). There is generally a trade-off between accuracy and explainability: deep learning techniques score high on accuracy but are hard to explain, whereas decision trees are weaker in predictive performance but strong in explainability.
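To make the trade-off concrete, here is a minimal sketch (not from the original article) using scikit-learn: a shallow decision tree trained on the classic Iris dataset can print its entire decision logic as readable rules, something a deep neural network cannot offer out of the box.

```python
# Minimal sketch of tree explainability, assuming scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# A shallow tree: typically less accurate than a deep model,
# but its full decision logic fits on a screen.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the learned rules as plain-text if/else statements.
print(export_text(tree, feature_names=list(data.feature_names)))
```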

Enter explainable AI (XAI)

As models get more complex, developers often fail to understand why the system has arrived at a specific decision. This is where explainable AI comes into play.

Explainable AI (XAI) aims to address how the black-box decisions of AI systems are made. According to ResearchandMarkets, the global XAI market size is estimated to touch USD 21.03 billion by 2030, growing at a CAGR of 19% (2021-2030).

XAI is a catch-all term for the movements, initiatives, and efforts made in response to AI transparency and trust issues.

According to the Defense Advanced Research Projects Agency (DARPA), XAI aims to produce more explainable ML models while maintaining a high level of prediction accuracy.

Today, explainable AI (XAI) is a hot topic across industries, including retail, healthcare, media and entertainment, and aerospace and defence. For example, in retail, XAI helps predict upcoming trends, with the logical reasoning to back them up, allowing retailers to manage inventory better. In ecommerce, explainable AI helps make sense of a recommendation system's suggestions based on customers' search history and spending habits.
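The article does not prescribe a tool, but as an illustration, a post-hoc explainer such as SHAP is one widely used way to attribute an individual prediction, say a product recommendation or a trend forecast, to the input features that drove it. A hedged sketch, assuming the shap and scikit-learn packages are available:

```python
# Illustrative only: SHAP is one common post-hoc explainer, not the
# method prescribed by this article.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])

# Each value says how much a feature pushed this one prediction
# towards or away from the positive class.
print(shap_values)
```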

Need for XAI

In general, the need to explain AI systems arises for four reasons:

Explain to justify: XAI provides an auditable and provable way to defend algorithmic decisions as fair and ethical, which helps build trust.

Explain to control: Understanding system behaviour gives greater visibility into unknown vulnerabilities and flaws, so errors can be identified and corrected rapidly.

Explain to improve: When users know why the system produced a specific output, they also know how to make it smarter. Thus, XAI can be the foundation for further iterations and improvements.

Explain to discover: Asking for explanations helps users learn new facts and gather actionable intelligence from the data.

Data-driven culture 

Interpretable ML is a core concept of XAI. It helps embed trust in AI systems by bringing fairness (predictions made without discernible bias), accountability (predictions reliably traceable back to something or someone), and transparency (an explanation of how and why predictions are made).

Most importantly, once you understand how an AI system makes its decisions, you can govern it better within the organisation and improve model performance.

Also, knowing why and how a model works, and why it fails, enables ML engineers and data scientists to optimise it, and helps create a data-driven culture. For instance, understanding model behaviour across different input data distributions can expose biases in the input data, which ML engineers can correct to produce a fairer, more robust model.
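As a hypothetical example of such a check (synthetic data and a made-up sensitive attribute, purely for illustration), comparing a model's positive-prediction rate across subgroups is one simple way to surface the kind of input-data bias described above:

```python
# Hypothetical bias check on synthetic data; 'group' is a made-up
# sensitive attribute, not something from the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
group = rng.integers(0, 2, size=n)                  # sensitive attribute
X = rng.normal(size=(n, 3)) + 0.5 * group[:, None]  # features skewed by group
y = (X.sum(axis=1) + rng.normal(size=n) > 1).astype(int)

model = LogisticRegression().fit(X, y)
preds = model.predict(X)

# If the rates differ sharply, the model has likely absorbed the skew
# in the input data and needs adjusting.
for g in (0, 1):
    print(f"group {g}: positive-prediction rate = {preds[group == g].mean():.2f}")
```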

This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry.


Anirban Nandi
With close to 15 years of professional experience, Anirban specialises in data science, business analytics, and data engineering, spanning various verticals of online and offline retail, and has built analytics teams from the ground up. Following his Master's in Economics from JNU, Anirban started his career at Target and spent more than eight years developing in-house products like Customer Personalisation, Recommendation Systems, and Search Engine Classifiers. After Target, Anirban became one of the founding members of Data Labs (Landmark Group) and spent more than 4.5 years building an onshore and offshore team of ~100 members working on assortment, inventory, pricing, marketing, ecommerce, and customer analytics solutions.
