A Human-Centric Approach Towards Explainable AI

New-age technologies like AI and machine learning offer transformational opportunities for the human species as a whole. Humans have the power to shape and apply technology to create positive change and improve lives. Hence, it is critical to ensure that our governments and industry leaders take a human-centric approach to maximise the impact of technologies to benefit the greater common good. 

In that light, building machine learning systems that are reliable, trustworthy, and fair requires relevant stakeholders — including developers, users, and ultimately the people affected by these systems — to have a basic understanding of how they work.

That’s where intelligibility comes in. Also known as interpretability or explainability, it is the property of a system that can explain what it knows, how it knows it, and what it is doing.

A recently published Microsoft research paper argues why and how intelligibility should be human-centric and proposes a pathway to achieve it.

Challenges In Achieving Intelligibility

Machine learning researchers have proposed many techniques for achieving explainability. The most common approach is to design and deploy models that are simple enough to explain their own decision-making through words or with the aid of visualisations and tools; here, the model itself explains its behaviour. There is significant evidence that simpler models can be as accurate as complex ones, so researchers encourage the use of simpler models wherever possible.
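
As a concrete illustration of this "simple, self-explaining model" idea, the sketch below trains a shallow decision tree whose complete decision logic can be printed and read directly. The dataset and depth limit are illustrative assumptions, not something the researchers prescribe.

```python
# A minimal sketch of an intrinsically interpretable model: a shallow decision tree
# whose decision rules can be read directly, rather than explained after the fact.
# The dataset and depth limit are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# A depth-limited tree keeps the full decision logic small enough to inspect by hand.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The model "explains itself": every prediction follows one of these printed rules.
print(export_text(model, feature_names=list(X.columns)))
```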


For complex models, many post-hoc explanation techniques have been proposed. These estimate each feature’s “importance” in generating a prediction for a particular data point. However, research has indicated that post-hoc explanations do not always reflect a model’s true behaviour and should, in a sense, be viewed as post-hoc justifications rather than causal explanations.
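
The sketch below gives the flavour of such a per-prediction importance estimate. It is a hypothetical perturbation-based sensitivity check, not any specific published explainer, and the model and dataset are illustrative assumptions.

```python
# A hypothetical sketch of a post-hoc explanation: estimate each feature's local
# "importance" for a single prediction by perturbing that feature and measuring how
# much the predicted probability moves. An illustrative sensitivity check only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                                  # the single data point to explain
base = model.predict_proba([x])[0, 1]     # the model's original score for this point

importance = {}
for j in range(X.shape[1]):
    perturbed = x.copy()
    perturbed[j] += X[:, j].std()         # nudge one feature by one standard deviation
    shifted = model.predict_proba([perturbed])[0, 1]
    importance[j] = abs(shifted - base)   # how strongly the score reacts to feature j

# Features whose perturbation moves the prediction most are deemed "important" here.
print(sorted(importance, key=importance.get, reverse=True)[:5])
```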

Moreover, multiple stakeholders are affected by ML algorithms, including users and developers, so no single universal method can achieve machine learning intelligibility. For instance, an explanation of why a bank loan was rejected is of little help to a developer trying to debug the model.

Within the machine learning community, the need for intelligibility arises from the desire to improve an algorithm’s robustness or to gain buy-in from customers who want more explainability; in such cases, practitioners lean towards simpler models. Outside the machine learning community, end users usually demand intelligibility to understand why particular decisions were made.

Microsoft researchers argue that the design or use of any intelligibility technique must start with an investigation of which stakeholder needs it.

A Human-Centric Approach

Identifying the right intelligibility technique for specific stakeholders is not straightforward. Despite researchers’ efforts to develop various intelligibility methods, there has been very little evaluation of whether they actually help. Such evaluations are challenging: they demand expertise to carry out, the effects of the model must be separated from those of the technique during analysis, and experiments need to place participants in realistic settings.

Past efforts to analyse the effectiveness of intelligibility tools have shown that very few participants could accurately describe how the tools work. This gap arises because users of these models are not just ‘passive consumers’ but ‘active partners’ who form their own ‘mental models’ when interacting with these systems.

Thus, to help people build better mental models, we need to design intelligibility techniques that facilitate two-way communication between people and systems, since both people’s mental models and the systems themselves are shaped by a social context that is shared through communication. One way to achieve this is through interactive explanations.
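
One plausible form such an interactive explanation could take is a simple “what-if” loop, sketched below, where a person changes a feature value and immediately sees how the prediction responds. The model, dataset, and loop design are assumptions made for illustration.

```python
# A minimal sketch of one possible interactive explanation: a "what-if" loop in which
# a person changes a feature value and immediately sees how the model's prediction
# responds. The model, dataset, and loop design are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
model = LogisticRegression(max_iter=5000).fit(data.data, data.target)

x = data.data[0].copy()
print("current score:", model.predict_proba([x])[0, 1])

while True:
    query = input("feature index and new value (blank to stop): ").strip()
    if not query:
        break
    idx, value = query.split()
    x[int(idx)] = float(value)            # the person "asks" a what-if question
    print("new score:", model.predict_proba([x])[0, 1])  # the system answers it
```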

The researchers argue that, beyond explaining the model itself, we should also aim for intelligibility of the other components of a machine learning pipeline, including datasets, training algorithms, performance metrics, and even errors. This can help uncover hidden assumptions and mitigate fairness issues.
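
For example, a performance metric becomes more intelligible when it is reported separately for subgroups of the data, as in the minimal sketch below; the labels, predictions, and group attribute are hypothetical.

```python
# A minimal sketch of making components other than the model intelligible: the same
# performance metric reported per subgroup, which can surface hidden assumptions in
# the data and potential fairness issues. All values below are hypothetical.
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # hypothetical ground truth
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])                  # hypothetical predictions
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])   # hypothetical sensitive attribute

print("overall accuracy:", accuracy_score(y_true, y_pred))
for g in np.unique(group):
    mask = group == g
    print(f"group {g} accuracy:", accuracy_score(y_true[mask], y_pred[mask]))
```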

There is also a great need for tighter integration between the machine learning and HCI communities. Intelligibility has a strong history in HCI, and human-centred fields like psychology and anthropology can help build a more comprehensive understanding of which techniques to develop. For instance, ‘communication theory’ can be used to improve the interactive explanations mentioned above.

Kashyap Raibagi
Kashyap currently works as a Tech Journalist at Analytics India Magazine (AIM). Reach out at kashyap.raibagi@analyticsindiamag.com
