Council Post: Beyond Explainable AI—How to infuse trust in AI systems

Contractual trust is a concept that defines and quantifies trustworthy AI.

According to McKinsey, AI will add USD 13 trillion to the global economy by 2030. Despite this explosive growth, companies struggle to scale up their AI efforts and deliver bias-free algorithms and products. The widespread adoption of AI calls for new standards and practices. Explainable AI (XAI) is currently the most popular approach to tackling the black-box nature of machine learning models. However, it is time to explore newer methods to build trust in AI systems.

As the next step in building trust, researchers are now focusing on the bias assessment of AI systems. This article discusses a framework to evaluate trust at various stages of a typical data science workflow and makes a case for going beyond XAI when assessing bias in automated decision-making systems.

Why we need to move beyond XAI

Biases are an integral part of the human experience, and thanks to our skewed judgement, AI has inherited these biases from us. It becomes important for leaders to understand the organisational and cultural barriers AI initiatives face and to mitigate them by educating the workforce on ethics, changing traditional mindsets and driving innovation free of bias.

We have seen plenty of racial, gender and other biases in AI systems. For example, e-commerce giant Amazon rescinded a model used to score job applicants after it was found to penalise women. Content personalisation and ad ranking systems have also been in the dock for racial and gender profiling.

The bias creeps in well before the product gets deployed. In machine learning, this is known as ‘model bias.’ A machine learning model with 100 percent accuracy is not necessarily stable or genuinely learning; ML practitioners and data scientists should also check for bias in the data, the algorithm and the model before moving to production.
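
To make this concrete, here is a minimal sketch of one such pre-production check, a demographic parity gap computed from a model’s predictions; the column names and the toy data are purely illustrative.

```python
# A minimal sketch of a pre-deployment bias check, assuming a binary
# classifier and a single sensitive attribute column named "gender".
# The column names, data and tolerance are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, sensitive_col: str, pred_col: str) -> float:
    """Difference between the highest and lowest positive-prediction
    rates across groups of the sensitive attribute."""
    rates = df.groupby(sensitive_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Toy example: predictions for six applicants.
scores = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M"],
    "hired_pred": [0, 1, 0, 1, 1, 0],
})
gap = demographic_parity_gap(scores, "gender", "hired_pred")
print(f"Demographic parity gap: {gap:.2f}")  # ~0.33 here; flag if above an agreed tolerance
```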

XAI is a good tool for describing model predictions in a human-interpretable way. For example, feature attribution methods such as SHAP and LIME can, in most cases, render black-box machine learning models explainable. But can XAI alone create trustworthy AI?
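
As a rough illustration of how feature attribution works in practice, the sketch below uses the shap library’s TreeExplainer on a small tree ensemble; the synthetic data and features are hypothetical stand-ins for a real model.

```python
# A minimal sketch of feature attribution with SHAP on a tree-based model;
# the synthetic data and the three unnamed features are purely illustrative.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                  # three hypothetical features
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)  # target driven mostly by feature 0

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # model-specific explainer for tree ensembles
shap_values = explainer.shap_values(X)       # per-feature contribution to each prediction
# Mean absolute SHAP value per feature gives a simple global importance ranking.
print(np.abs(shap_values).mean(axis=0))
```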

To build trust in AI, we also need tools to mitigate bias. Ethical AI is mainly concerned with bias detection before and after model predictions. However, its bias mitigation prowess is minimal, and we have yet to develop robust and easily deployable techniques.
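
For illustration only, the sketch below shows one simple pre-processing mitigation, reweighing training samples so that the sensitive attribute and the label look statistically independent; the column names and data are made up, and real deployments would need far more robust techniques, as noted above.

```python
# A minimal sketch of reweighing as a pre-processing mitigation; the
# "gender" and "hired" columns and the toy data are hypothetical.
import pandas as pd

def reweighing_weights(df: pd.DataFrame, sensitive_col: str, label_col: str) -> pd.Series:
    """Weight = P(group) * P(label) / P(group, label), so the weighted data
    behaves as if the sensitive attribute and the label were independent."""
    p_group = df[sensitive_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([sensitive_col, label_col]).size() / len(df)
    weights = [
        p_group[row[sensitive_col]] * p_label[row[label_col]]
        / p_joint[(row[sensitive_col], row[label_col])]
        for _, row in df.iterrows()
    ]
    return pd.Series(weights, index=df.index)

train = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "M"],
    "hired":  [0, 0, 1, 1, 1, 0],
})
# Pass these as sample weights to the learner; combinations that are
# under-represented relative to independence receive weights above 1.
train["sample_weight"] = reweighing_weights(train, "gender", "hired")
print(train)
```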

Below, we look at the phases of a typical data science workflow where trust can be well defined and evaluated. Most of the discussion centres on introducing an empirical framework for evaluating trust in AI systems and the metrics used for this purpose at different stages of the data science workflow.

Other trustworthy-AI requirements such as technical robustness, safety, data governance, accountability, and societal and environmental well-being can be seamlessly integrated into our framework.

The Human-AI trust framework is inspired by a research article by Alon Jacovi et al. on formalising trust in AI. First, we define the concept of contractual trust.

Contractual Trust

Let’s start with a formal definition of trust given by Mayer et al.: if A anticipates that B will act in A’s best interest, and accepts vulnerability to B’s actions, then A trusts B. Here, A can be an organisation/user/system that entrusts another organisation or data scientist (B) to build a trustworthy AI model.

However, there are risks involved in the collaboration between A and B. A anticipates that B will execute the transaction in A’s best interest despite the vulnerabilities in the processes executed by B. These vulnerabilities in B are probable, and A is aware of them. For example, having a fair AI model can be A’s anticipation, and a drop in model performance can be B’s vulnerability in the course of achieving that fair model. The notion of trust exists if and only if these anticipations and vulnerabilities mutually co-exist and are acknowledged by both A and B.

Contractual trust is a concept that defines and quantifies trustworthy AI. This is done by incorporating a set of phases into the data science workflow whose orchestration and purpose the client and the data science organisation agree upon. In other words, the client can set their anticipations about the phases to see if they meet predefined conditions or states, while the data science organisation can set out the probable vulnerabilities that can arise in the pursuit of those anticipations.

Once the anticipations and vulnerabilities are defined and acknowledged, we can form an action plan consisting of several checkpoints for evaluation at different stages of the data science workflow. Each checkpoint acts as a mechanism for the client to evaluate the data science team, and checkpoints can be given different weightage according to the client’s requirements. For example, if a client needs data anonymisation as the principal constraint, it can carry more weightage than model performance or model bias. One obvious advantage of checkpoints is that each stage can be evaluated with client participation, ensuring transparency in the development process. This way, we can assign a Human-AI trust score to the entire data science pipeline.
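
A minimal sketch of how such weighted checkpoints could roll up into a single Human-AI trust score is shown below; the checkpoint names, weights and scores are hypothetical placeholders for what the client and the data science team would actually agree on.

```python
# A minimal sketch of a weighted checkpoint roll-up into a Human-AI trust
# score; the checkpoint names, weights and scores below are hypothetical.
from dataclasses import dataclass

@dataclass
class Checkpoint:
    name: str       # stage of the data science workflow being evaluated
    weight: float   # relative importance agreed with the client
    score: float    # client's evaluation of this stage, in [0, 1]

def trust_score(checkpoints: list[Checkpoint]) -> float:
    """Weighted average of checkpoint evaluations, normalised to [0, 1]."""
    total_weight = sum(c.weight for c in checkpoints)
    return sum(c.weight * c.score for c in checkpoints) / total_weight

pipeline = [
    Checkpoint("data anonymisation", weight=0.4, score=0.9),  # principal constraint
    Checkpoint("model bias audit",   weight=0.3, score=0.8),
    Checkpoint("model performance",  weight=0.2, score=0.7),
    Checkpoint("documentation",      weight=0.1, score=1.0),
]
print(f"Human-AI trust score: {trust_score(pipeline):.2f}")   # 0.84 for these toy values
```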

Advantages

  1. Two-way solution: The client and data science team are part of the framework.
  2. The clients can clearly distinguish their expectations, referred to as anticipation in the framework.
  3. The data science team can list out the possible flaws that can happen, referred to as vulnerabilities.
  4. The trust framework is well-defined over a list of actions.
  5. The entire framework takes the interest of both parties into account.
  6. The checkpoints enable stage-by-stage evaluation by the clients and ensure transparency.
  7. Based on the client’s evaluation, checkpoints can be updated, or new ones inserted.
  8. The framework has a scoring mechanism that evaluates the entire workflow with both parties involved.

This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill the form here.

Shashank Shekhar

Shashank is a Data Science leader with diverse experience across verticals including Telecom, CPG, Retail, Hi-tech and E-commerce. He currently heads the Artificial Intelligence Labs at Subex. In the past, he has worked at VMware, Amazon, Flipkart and Target, solving complex business problems using Machine Learning and Deep Learning.
