What Explainable AI Cannot Explain And What Can Be Done

The effectiveness of a machine learning model is often marred by its inability to explain its decisions to its users. To address this problem, an entire branch of research known as explainable AI (XAI) has emerged, and researchers are actively pursuing different methodologies to build user-friendly AI. But how good are the existing XAI approaches, and where do they fail? To answer these questions, a team of researchers from UC Berkeley and Boston University has investigated the challenges and possible solutions. Their exploration led to a novel technique, which is discussed in the last section of this article.

Explaining The Inexplicable

To illustrate the inexplicability, one of the authors, Alvin Wan, has used the example of saliency maps and decision trees in a blog post.
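For readers unfamiliar with the first of those, the sketch below (not taken from the article or the blog) shows how a basic vanilla-gradient saliency map is typically computed in PyTorch: the score of the predicted class is backpropagated to the input pixels, and the per-pixel gradient magnitude is read as the explanation. The resnet18 model and the random input tensor are stand-ins chosen purely for illustration.

```python
# Minimal sketch of a vanilla-gradient saliency map (illustrative only;
# not the authors' code). Assumes torch and torchvision are installed.
import torch
from torchvision import models

model = models.resnet18(pretrained=True).eval()

# Stand-in for a preprocessed 224x224 RGB image
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
top_class = logits.argmax(dim=1).item()

# Backpropagate the top-class score to the input pixels
logits[0, top_class].backward()

# Saliency = per-pixel gradient magnitude, maxed over the colour channels
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
```

Taking the maximum gradient magnitude over the colour channels is one common convention; averaging over channels is an equally valid choice. A map like this highlights which pixels influenced the prediction, which is precisely the style of explanation the article goes on to examine.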