Is Explainability In AI Always Necessary?

“AI models do not need to be interpretable to be useful.”

Nigam Shah, Stanford

Interpretability in machine learning dates back to the 1990s, when it went by neither the name “interpretability” nor “explainability”. Interpretable and explainable machine learning techniques emerged from the need to design intelligible systems and to understand and explain predictions made by opaque models such as deep neural networks.

In general, the ML community has yet to agree on a definition of explainability or interpretability; sometimes it is even called understandability. Some define interpretability as “the ability to explain or to present in understandable terms to a human”. According to experts, interpretability depends on the domain of application and the target audience, so a one-size-fits-all definition might be infeasible or unnecessary. When the concepts are used so interchangeably, would it be wise to sacrifice a model’s usability for lack of comprehension? Where does one draw the line?


Despite deep learning’s popularity, many organisations are still comfortable using logistic regression, support vector machines and other conventional methods. Model-agnostic techniques can be applied to such traditional models, but they are considered overkill for explaining kernel-based ML models: they can be computationally expensive and can lead to poorly approximated explanations.
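
As a rough illustration of what a model-agnostic explanation looks like in practice, the sketch below applies scikit-learn’s permutation importance to an RBF-kernel SVM. The dataset, model and parameters here are illustrative assumptions rather than anything described in the article.

```python
# A minimal sketch of a model-agnostic explanation for a kernel SVM,
# using scikit-learn's permutation importance. The dataset, model and
# parameters are illustrative assumptions, not taken from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque, kernel-based model: its decision function cannot be read off
# the way logistic-regression coefficients can.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)

# Model-agnostic explanation: shuffle each feature repeatedly and measure
# the drop in held-out accuracy. The cost scales with n_repeats times the
# number of features times the model's inference cost, which is one reason
# such methods can get expensive, and the result is only an approximation
# of the model's behaviour.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```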

Stanford’s Nigam Shah, in a recent interview, touched on why explainability may not always be necessary. “We don’t fully know how most of them really work. But we still use them because we have convinced ourselves via randomized controlled trials that they are beneficial,” said Shah.

Explainability In Its Many Forms


For any organisation, explainability becomes an issue when clients or other stakeholders come into the picture. Explanations for these stakeholders broadly serve one of two purposes:

  • A one-off sanity check, or reasoning for a particular prediction that can be shown to other stakeholders.
  • A way to garner feedback from the stakeholder on how the model ought to be updated to better align with their intuition.

Explainable methodologies are generally believed to have broader advantages, as their results can be communicated to a wider audience and not just the immediate stakeholders. They help share insights across the organisation without the need for a specialist in every scenario.

According to Shah, there are three main types of AI interpretability: 

  1. Explainability that focuses on how a model works.
  2. Causal explainability, which deals with the “whys and hows” of a model’s inputs and outputs.
  3. Trust-inducing explainability, which provides the information required to trust a model and deploy it with confidence.

So, it is important to know what type of explainability a data science team is targeting. That said, there is a chance that a use case might be a mix of all three. Such trade-offs and overlaps present a bundle of paradoxes to a decision-maker.

With increasing sophistication and completeness, a system becomes less understandable. “As a model grows more realistic, it becomes more difficult to understand,” said David Hauser at the recently concluded machine learning developers conference. According to Hauser, clients want models to be both understandable and realistic, which is another paradox a data scientist has to live with. He also stressed that understandable solutions give up accuracy; network pruning, for instance, is one such technique that takes a hit on accuracy. The moment non-linearities or interactions are introduced, the answers become less intuitive.
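
Hauser’s point about non-linearities and interactions can be made concrete with a toy example. The sketch below is an illustration of the general idea, not anything from the talk: in a purely additive linear model each coefficient is a directly readable effect, whereas a single interaction term makes the effect of one feature depend on the value of another.

```python
# A toy illustration (assumed, not from the article) of why interactions
# make model answers less intuitive.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=1000), rng.normal(size=1000)

# Additive target: the fitted coefficients (roughly 2 and 3) can be read
# directly as "effect per unit of each feature".
y_add = 2 * x1 + 3 * x2 + rng.normal(scale=0.1, size=1000)
additive = LinearRegression().fit(np.column_stack([x1, x2]), y_add)
print("additive coefficients:", additive.coef_)

# Target with an interaction: fitting it requires the x1*x2 feature, and the
# marginal effect of x1 becomes 2 + 1.5*x2, so it changes from row to row and
# no single coefficient summarises it.
y_int = 2 * x1 + 3 * x2 + 1.5 * x1 * x2 + rng.normal(scale=0.1, size=1000)
interacting = LinearRegression().fit(np.column_stack([x1, x2, x1 * x2]), y_int)
print("with interaction term:", interacting.coef_)
```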

“Do you, as a user, care how the weather is predicted, and what the causal explanation is, as long as you know a day ahead if it is going to rain and the forecast is correct?”

We live in a world with an abundance of tools and services, and making the right choice leads to another paradox: Fredkin’s paradox, which states that the more similar two alternatives seem, the harder it is to choose between them and the more time and effort the decision requires.

Stanford professor Shah has also emphasised the trust paradox. According to him, explanations aren’t always necessary; worse, they sometimes lead people to rely on a model even when it is wrong. What engineers need from interpretability might not coincide with what model users need, since the users’ focus is on causality and trust. Furthermore, explanations can also dent the chances of figuring out what one really needs.

Key Takeaways

In his interview with Stanford HAI, Shah shared:

  • AI models do not need to be interpretable to be useful.
  • Doctors at Stanford prescribe drugs on a routine basis without fully knowing how most of them really work.
  • In health care, where AI models rarely lead to automated decision making, an explanation may or may not be useful.
  • If it is too late for the clinician to intervene, what good are the explanations?
  • But AI used for job interviews, bail, loans, health care programs or housing absolutely requires a causal explanation.

One of the vital purposes of explanations is to improve ML engineers’ understanding of their models so they can refine them and improve performance. But since machine learning models are “dual-use”, explanations and other such tools could also enable malicious users to increase the capabilities and performance of undesirable systems.

There is no denying that explanations enable model refinement. Going forward, apart from debugging and auditing models, organisations are also looking at data privacy through the lens of explainability. Whether in medical diagnosis or credit card risk estimation, making models more explainable cannot come at the cost of privacy. Sensitive information is thus another hurdle for explainability.

