A Human-Centric Approach Towards Explainable AI

New-age technologies like AI and machine learning offer transformational opportunities for humanity as a whole. Humans have the power to shape and apply technology to create positive change and improve lives. Hence, it is critical that governments and industry leaders take a human-centric approach so that these technologies serve the common good.

In that light, building machine learning systems that are reliable, trustworthy, and fair requires relevant stakeholders — including developers, users, and ultimately the people affected by these systems — to have a basic understanding of how they work.

That’s where intelligibility comes in. Also known as interpretability or explainability, it is the property of a system that can explain what it knows, how it knows it, and what it is doing.

A recently published Microsoft research paper argues why and how intelligibility should be human-centric, and proposes a pathway to achieve this.

Challenges In Achieving Intelligibility

In choosing the right approach to explainability, machine learning researchers have proposed many techniques over the years. The most common approach is to design and deploy models that are simple enough to explain their own decision-making, in words or with the aid of visualisations and tools; here, the model itself explains its behaviour. There is significant evidence that simpler models are often as accurate as complex ones, and researchers therefore encourage the use of simpler models wherever possible.
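As a minimal sketch of this idea (the dataset, library, and depth limit below are illustrative choices, not ones named in the paper), a shallow decision tree can be printed as plain-text rules, so the model itself serves as the explanation:

```python
# A minimal sketch of an intrinsically interpretable model: a shallow
# decision tree whose learned rules can be printed and read directly.
# The dataset and feature names are illustrative, not from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# Keep the tree shallow so the whole model fits in a few readable rules.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# export_text renders the tree as plain-text if/else rules -- the model
# itself is the explanation, with no separate explainer needed.
print(export_text(model, feature_names=list(data.feature_names)))
```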

For complex models, many post-hoc explanation techniques have been proposed. These estimate each feature’s “importance” in generating a prediction for a particular data point. However, research has indicated that post-hoc explanations do not always reflect a model’s true behaviour and, in a sense, should be viewed as post-hoc justifications rather than causal explanations.
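To give a flavour of such techniques (the toy perturbation probe below is our illustration, not a method from the paper; real tools such as LIME or SHAP are more principled), one can score features by how much nudging each one moves the model’s prediction for a single data point:

```python
# A toy local sensitivity probe: for one data point, nudge each feature
# and record how much the predicted probability moves. Real post-hoc
# methods such as LIME or SHAP are more principled, but the flavour is
# the same: importance is estimated from input-output behaviour alone.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x = X[0].copy()                          # the prediction we want explained
base = model.predict_proba([x])[0, 1]    # baseline probability of class 1

scores = {}
for i, name in enumerate(data.feature_names):
    perturbed = x.copy()
    perturbed[i] += X[:, i].std()        # nudge one feature by one std dev
    scores[name] = model.predict_proba([perturbed])[0, 1] - base

# Show the features whose perturbation moves the prediction the most.
for name in sorted(scores, key=lambda k: abs(scores[k]), reverse=True)[:5]:
    print(f"{name}: {scores[name]:+.3f}")
```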

Moreover, multiple stakeholders are impacted by ML algorithms, including users and developers, so no single universal method can achieve machine learning intelligibility. For instance, an explanation of why a bank loan was rejected may help the applicant, but it does little to help a developer debug their code.

Within the machine learning community, the need for intelligibility arises when improving an algorithm’s robustness or winning buy-in from a customer who wants more explainability; in such cases, practitioners lean towards simpler models. Outside the machine learning community, end-users usually demand intelligibility to understand why particular decisions were made.

Microsoft researchers argue that the design or use of any intelligibility technique must start with an investigation of which stakeholder needs it.

A Human-Centric Approach

Identifying the right intelligibility technique for specific stakeholders is not straightforward. Despite researchers’ efforts to devise various intelligibility methods, there has been very little evaluation of whether they actually help. Such evaluations are challenging: they require expertise to carry out, the effects of the model must be separated from those of the technique during analysis, and participants must be placed in realistic settings during experimentation.

Past efforts to analyse the effectiveness of intelligibility tools have shown that very few participants could accurately describe how the tools work. This gap arises because users of these models are not just ‘passive consumers’ but ‘active partners’ who form their own ‘mental models’ when interacting with these systems.

Thus, to help people develop better mental models, we need to design intelligibility techniques that facilitate to-and-fro communication between people and systems, since people’s mental models are shaped by their social context and shared through communication. One way to achieve this is through interactive explanations.
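As a toy sketch of what an interactive explanation could look like (the command-line interface below is hypothetical, not one proposed in the paper), a “what-if” loop lets a person edit one input at a time and watch the prediction respond:

```python
# A toy "what-if" loop: a person probes the model by editing one input
# at a time and watching the prediction respond -- a crude form of the
# to-and-fro, interactive explanation discussed above. The interface
# here is hypothetical, not something proposed in the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

x = data.data[0].copy()  # the instance being explained
print(f"baseline p(benign) = {model.predict_proba([x])[0, 1]:.3f}")

while True:
    cmd = input("'<feature index> <new value>' to probe, 'q' to quit: ")
    if cmd.strip().lower() == "q":
        break
    idx, value = cmd.split()
    x[int(idx)] = float(value)  # apply the user's what-if edit
    print(f"{data.feature_names[int(idx)]} -> {value}: "
          f"p(benign) = {model.predict_proba([x])[0, 1]:.3f}")
```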

The researchers argue that beyond achieving explainability of the model itself, we should also aim for intelligibility of the algorithm’s other components, including datasets, training algorithms, performance metrics, and even errors. This can help uncover hidden assumptions and mitigate fairness issues.
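One concrete reading of that suggestion (the synthetic labels and the ‘group’ attribute below are assumptions for illustration) is to disaggregate a performance metric by subgroup, which can expose fairness problems that a single aggregate score hides:

```python
# A sketch of making a *metric* intelligible: report accuracy per
# subgroup instead of one aggregate number. The synthetic labels and
# the 'group' attribute are stand-ins for a real dataset's fields.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.choice(["A", "B"], size=1000)

# Mimic a model that systematically underperforms on group B.
y_pred = y_true.copy()
flip = (group == "B") & (rng.random(1000) < 0.3)
y_pred[flip] = 1 - y_pred[flip]

print(f"overall accuracy: {accuracy_score(y_true, y_pred):.3f}")
for g in ["A", "B"]:
    mask = group == g
    print(f"group {g} accuracy: "
          f"{accuracy_score(y_true[mask], y_pred[mask]):.3f}")
```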

There is a great need for tight integration between the machine learning and HCI communities. “Intelligibility” has a strong history in HCI, and human-centred fields like psychology and anthropology can contribute to a more comprehensive understanding of how such techniques should be designed. For instance, communication theory could inform the interactive explanations mentioned above.

Kashyap Raibagi

Kashyap currently works as a Tech Journalist at Analytics India Magazine (AIM). Reach out at kashyap.raibagi@analyticsindiamag.com