The Need for Interpretable Machine Learning Solutions


The ever-increasing ubiquity of Artificial Intelligence needs no special mention. Its reach has become so tacit that it is now deeply intertwined with our day-to-day lives. But with advancements in the form of more sophisticated algorithms, greater awareness, and higher compute power comes the need to assess the impact, and the scale, at which things can go wrong.

Let us leave AI aside for a moment and think purely from a technological point of view. Ever since the beginning, technology has broadly had only one goal, i.e., to improve human lives. In other words, serving humanity lies at its core.


The same applies to AI, and we must not wait for doomsday to understand how critical it is to interpret an ML model rather than treat it as a black box.

AI finds application in many critical, high-stakes decisions: healthcare, lending, and criminal justice, to name a few.


Let’s discuss them in detail:

  • Healthcare: The patient shares symptoms with the doctor and, after a series of follow-up questions, receives either a medical prescription or further diagnostics. If the same prescription were advised by a machine, would you accept it with the same degree of confidence? One such example is a graphical-model-based application in which parents are asked a series of questions by the machine and are advised of the most probable underlying disease in the child.
    • What would a non-ML-aware person think of such machine-suggested medical results? "Out of a population of many patients with varying characteristics, there must exist a group of patients who exhibit symptoms similar to mine, and hence the machine suggested the prescribed set of follow-ups." This might work well in general, but patients tend to prefer (read: trust) specialized treatment suggested by doctors based on personal interaction.
  • Court and law: Can, and should, a machine have the emotional compass to decide
    • when to be lenient with first-time offenders, or
    • how to stop replicating or amplifying the existing societal bias of handing harsher punishments to people of a certain color?
  • Banking: Banks might reject a loan application if the applicant's credit history is not strong or the salary is below a particular range. If the reasons are quoted, the applicant can work on them before applying again. But what if the application is rejected simply because the score output by the algorithm is below a particular threshold?

The False Positives and False Negatives appear as just a few numbers in the off-diagonal cells of a confusion matrix. But depending on the criticality of the machine's output, someone's life may depend on them. Be it the algorithm that favored patients of a certain color for high-quality treatment or a gender-biased hiring algorithm, there are multiple cases where ethics were not honored and humanity suffered from unexplained and biased outcomes.
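To make the point concrete, here is a minimal sketch of where those numbers live. The labels below are invented for illustration; in a binary confusion matrix, the False Positives and False Negatives are the off-diagonal entries, and each one can be a wrongly approved or wrongly rejected person.

```python
# Hypothetical binary decisions: 1 = loan should be approved, 0 = rejected.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # ground truth
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]  # model's decision

# Diagonal cells: correct decisions.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
# Off-diagonal cells: each count is a person wrongly approved or rejected.
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

print(f"TP={tp} TN={tn} FP={fp} FN={fn}")  # TP=3 TN=3 FP=1 FN=1
```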

If so critical, then why AI?

You may well be thinking that if these applications are so high on the criticality index, i.e. the cost associated with every wrong prediction is so high, then why do we even use AI rather than follow the status quo? Let's go deeper and analyze our options.

Most such tasks are carried out by humans today, but none of them work with 100% accuracy. Say a supply chain planner analyzes a vast number of variables to flag a probable delay in a shipment. When asked to quantify the process, however, only a limited set of factors is effectively considered before arriving at a decision. Worse, the process is not consistent across different planners and is affected by conscious and unconscious bias. So, if humans also cannot explain the many factors behind a process and cannot guarantee 100% accuracy, isn't resorting to explainable machines to aid human decision-making a good bet?

Beyond this, think of using machines to create super-humans. Not all humans working in a particular profession are experts, but some are. Wouldn't it be a good idea to have machines learn from these experts and assist the less experienced in their fields? It also works on consensus: if the machine's input comes from several human labelers, not all labelers will make the same mistake. The machine learns the consensus and extracts the good signal from such data, thereby gaining an edge over individual humans.

Miller’s magical number:

Now that we understand why interpretability is important, let us also understand that it is not limited to outputting the top N features from a global feature importance ranking. The end-user should be able to make sense of the output, and according to Miller's magical number, it is difficult to process information beyond 7 (plus or minus 2) variables.

“as we add more variables to the display, we increase the total capacity, but we decrease the accuracy for any particular variable. In other words, we can make relatively crude judgments of several things simultaneously.”


Falling Rule Lists:

Put simply, it is difficult to digest 20 different explanations coming from a complex model. This points to very interesting research by Prof. Cynthia Rudin called Falling Rule Lists. Based on a Bayesian framework, it produces an ordered list of if-then rules and does not rely on traditional greedy decision-tree learning methods.



One big takeaway from FRL is that it does not complicate the interpretation by throwing in a lot of variables that are difficult to analyze for a particular prediction. Besides, it keeps the interpretation very close to the real world, and hence easy to implement and trust.
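To illustrate what "an ordered list of if-then rules" looks like when applied, here is a toy sketch. The rules and probabilities below are invented for illustration and are not taken from Rudin's learned models; in a real falling rule list the rules and their monotonically decreasing risk probabilities are learned from data.

```python
# An ordered if-then rule list: the first matching rule fires, and the
# attached risk probabilities fall as we go down the list.
rules = [
    (lambda x: x["prior_defaults"] >= 2,  0.85),  # highest-risk rule first
    (lambda x: x["credit_score"] < 550,   0.60),
    (lambda x: x["debt_to_income"] > 0.5, 0.40),
]
default_prob = 0.10  # the "else" clause at the bottom of the list

def predict_risk(applicant):
    """Return (matched_rule_index, risk_probability); -1 means the else clause."""
    for i, (condition, prob) in enumerate(rules):
        if condition(applicant):
            return i, prob
    return -1, default_prob

applicant = {"prior_defaults": 0, "credit_score": 530, "debt_to_income": 0.3}
print(predict_risk(applicant))  # (1, 0.6): matched "credit_score < 550"
```

The explanation for any single prediction is exactly one rule, which is what keeps the interpretation small enough to reason about.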

A framework like FRL provides much-needed visibility into the internal workings behind the model's output, unlike most black-box models. Black-box models, as the name suggests, make it difficult to trace the path multiple variables have taken to arrive at a prediction. The concerns percolate down to the data itself: issues like noise, typographical errors, and mislabels become difficult to detect.

The expectation from AI-enabled applications:

The machine learns associations from the training data in the form of a learned function. New, unseen data is passed through this learned function to generate the prediction, aka the output. What comes now is the most critical part:

  • Can we explain why the machine output a certain prediction, e.g. why a particular loan got rejected?
  • Which attribute could change the prediction from class A to class B, e.g. what factor should the applicant work on to get the loan approved?
  • If the loan is mistakenly rejected, how do we identify the mistake, and what is the corrective action?
  • In short, when can I trust the machine's output, and what are the corner cases where it fails?
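The second question above is a counterfactual one, and the simplest version of it can be sketched mechanically. The decision rule and feature values below are hypothetical, and real systems use far more principled counterfactual-search methods; this only shows the idea of finding the smallest change that flips a decision.

```python
# A hypothetical rule-based loan decision.
def approve(applicant):
    return applicant["credit_score"] >= 650 and applicant["income"] >= 30000

def counterfactual_credit_score(applicant, step=10, max_score=850):
    """Raise credit_score in small steps until the decision flips to approve.
    Returns the first approving score, or None if this feature alone
    cannot flip the decision."""
    candidate = dict(applicant)  # do not mutate the caller's record
    while candidate["credit_score"] <= max_score:
        if approve(candidate):
            return candidate["credit_score"]
        candidate["credit_score"] += step
    return None

applicant = {"credit_score": 600, "income": 45000}
print(counterfactual_credit_score(applicant))  # 650
```

The answer ("raise your credit score to 650") is exactly the kind of actionable explanation a rejected applicant would expect from a trustworthy system.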

Closing remarks:

We can reach a state of Trustworthy AI when we can explain the deviations in the output; this plays a critical role in developing trust in an AI system. To achieve it, the vulnerabilities of the AI ecosystem, i.e. what can go wrong, need to be well studied and taken care of. This will help us build robust and reliable AI solutions that are unbiased, abide by regulatory requirements, and can be trusted with their predictions and output to improve the quality of human life.


Copyright Analytics India Magazine Pvt Ltd
