
How Does Understanding Of AI Shape Perceptions Of XAI?

A new study argues that a person's AI background shapes how they interpret AI explanations, and examines these differences through the lens of appropriation.



One of the biggest challenges of machine learning and artificial intelligence is their inability to explain their decisions to users. This black-box nature renders such systems largely impenetrable, making it difficult for scientists and researchers to understand why a system behaves the way it does. In recent years, a new branch called explainable AI (XAI) has emerged, which researchers are actively pursuing to build user-friendly AI.

That said, how AI explanations are perceived depends heavily on a person's background in AI. A new study, titled "The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations", argues that AI background influences users' interpretations and that these differences can be understood through the lenses of appropriation and cognitive heuristics. The study was authored by researchers from Cornell University, IBM, and the Georgia Institute of Technology.

The Study

Background in AI is a vital user characteristic in XAI because there is often a disparity between the developer and the end-user: compared to the creator, the end-user is less likely to have an AI background. Despite that, XAI developers tend to design explanations for AI systems from the developer's perspective. This creates a consumer-creator gap between how developers envision AI explanations being interpreted and how users actually perceive them. The first step towards bridging this gap is understanding how user characteristics like AI background shape it.

This paper attempts to do that by focusing on two groups — one with and the other without an AI background. The authors found:

  • Both groups placed great trust in numbers; however, the reason for and degree of this trust varied. Interestingly, the group with an AI background trusted numerical representations more and seemed to be at a higher risk of being misled by their presence.
  • Both groups found explanatory value beyond the usage for which the explanations were designed.
  • Both groups seemed to appreciate 'humanlike' explanations; however, they had different interpretations of what counts as one.

To arrive at these results, the authors conducted a mixed-methods study in which participants were shown three types of AI-generated explanations (illustrated in the sketch after this list):

  • Natural language with justification: explaining the why behind the action,
  • Natural language without justification: describing what the action was, and
  • Numbers: the raw numerical values that determined the agent's actions.
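
To make the three explanation styles concrete, here is a minimal sketch; the agent, its action, and the wording of each explanation are hypothetical illustrations, not taken from the study's materials.

```python
# Hypothetical game-playing agent action and the three explanation styles
# compared in the study. All values and phrasings below are illustrative.
action = "move_left"

# 1. Natural language WITH justification: explains *why* the action was taken.
with_justification = "I moved left because a car was approaching from the right."

# 2. Natural language WITHOUT justification: only describes *what* was done.
without_justification = "I moved left."

# 3. Numbers: the raw action values (e.g. Q-values) that determined the choice.
action_values = {"move_left": 0.82, "move_right": 0.11, "stay": 0.07}

for label, explanation in [
    ("With justification", with_justification),
    ("Without justification", without_justification),
    ("Numbers", action_values),
]:
    print(f"{label}: {explanation}")
```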

Further, perceptions were recorded along five dimensions — intelligence, confidence, understandability, second chance, and friendliness. The researchers focused on a specific explanation-generation technique called rationale generation: producing a natural-language explanation for an agent's behaviour in the way a human might if they were performing the behaviour and verbalising their inner monologue. Explanations in general can take any modality, but rationales are natural language-based, which makes them especially accessible to non-AI experts.
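A minimal sketch of that framing, treating rationale generation as sequence-to-sequence "translation" from a textual encoding of the agent's state and action into a natural-language rationale. The model choice, prompt format, and game situation here are placeholder assumptions; a real rationale generator would be trained on a corpus of human think-aloud explanations.

```python
# Sketch: rationale generation as sequence-to-sequence translation.
# Assumes the Hugging Face `transformers` library; t5-small is a stand-in --
# a real system would be fine-tuned on (state, action, human rationale) data.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Hypothetical textual encoding of the agent's current state and chosen action.
state_action = (
    "state: car approaching from the right, goal is across the road; "
    "action: move left"
)

inputs = tokenizer("explain: " + state_action, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
rationale = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(rationale)  # untrained for this task, so the output is only illustrative
```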

Explaining Explainable AI

The origins of explainable AI can be traced back to the 1980s, but the field has been undergoing massive changes of late due to the emergence of complex deep learning models. Explainability is one of the major hurdles for companies adopting AI: according to FICO, 65 per cent of surveyed employees couldn't explain how their AI models' decisions or predictions are made.

While the research community is yet to arrive at a common definition of what XAI actually means, it shares the common goal of making AI systems' decisions or behaviours understandable to people. Current work makes simpler models such as linear regression and decision trees directly interpretable, but at the cost of performance. The focus is now slowly shifting towards developing algorithms that open the "opaque-box and allow 'under the hood' inspection without sacrificing performance."
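As a concrete illustration of direct interpretability, a shallow decision tree's learned rules can be printed and read off verbatim. A minimal sketch using scikit-learn; the dataset and depth limit are arbitrary choices for illustration.

```python
# Sketch: a directly interpretable model -- a shallow decision tree whose
# learned rules can be read as plain if/else statements.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# Depth is capped so the whole model stays small enough to read; this
# simplicity is exactly what tends to cost such models predictive power.
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(data.data, data.target)

# The entire decision process, printed as human-readable rules.
print(export_text(clf, feature_names=list(data.feature_names)))
```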

A newer approach, the explanation-generation method, has been gaining favour. The aim of this technique is not to make the system directly human-understandable. Instead, these are post-hoc techniques applied after model building, which rely on distilling simpler models from the inputs and outputs or on deriving the model's meta-knowledge. While this introduces a loss of fidelity, it offers the flexibility to make any model explainable. The approach has become very popular and is being applied to transforming AI plans into natural language, intelligent tutoring systems, transforming simulation logs into explanations, and translating multi-agent communication policies into natural language.
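One common post-hoc variant of this idea is a global surrogate: distilling a simple, readable model from the black box's own input-output behaviour. A minimal sketch, assuming scikit-learn; the choice of models and dataset are illustrative, not the specific techniques the paper describes.

```python
# Sketch: post-hoc explanation by distilling a simple surrogate model from
# a black-box model's inputs and outputs (a "global surrogate").
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The opaque model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Distil: train a shallow tree to mimic the black box's *predictions*, not
# the ground-truth labels -- the tree explains the model, not the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box. Imperfect
# agreement is the loss-of-fidelity trade-off mentioned above.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```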


Shraddha Goled

I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.