There has been a lot of talk about making machine learning more explainable so that stakeholders and customers can shed their scepticism of traditional black-box models. However, there has been far less discussion of how to deploy these explainability systems into real pipelines. To find out how explainability is actually being implemented, a group of researchers conducted a survey.
In the next section, we look at a few findings and deployment practices recommended by researchers at Carnegie Mellon University, who published the work in collaboration with other top institutes. In their survey of domain experts, the authors address questions such as:
- What are the pain points in deploying ML models and does explainability help?
- What type of explanations are used (e.g., feature-based, sample-based, counterfactual, or natural language)?
- How does an organisation decide when and where to use model explanations?
What Practitioners Have To Say
In the interviews they conducted with organisations as part of the survey, the researchers encountered recurring concerns including model debugging, model monitoring and transparency, among many others.
The study found that most data scientists struggle with debugging poor model performance. Identifying the cause of poor performance, engineering new features, dropping redundant ones, and gathering more data are among their most crucial tasks.
“Feature A will impact feature B, [since] feature A might negatively affect feature B — how do I attribute [importance in the presence of] correlations?“
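The attribution ambiguity this practitioner describes can be made concrete with a small self-contained sketch (ours, not from the paper): two strongly correlated features are equally predictive of the target, yet permutation importance assigns them very different scores depending on how a model happens to spread its weights between them.

```python
import random

random.seed(0)
n = 2000
latent = [random.gauss(0, 1) for _ in range(n)]
# x1 and x2 are near-copies of the same latent signal (highly correlated)
x1 = [z + random.gauss(0, 0.05) for z in latent]
x2 = [z + random.gauss(0, 0.05) for z in latent]
y = latent  # the target depends only on the shared signal

def mse(model, a, b):
    return sum((model(ai, bi) - yi) ** 2 for ai, bi, yi in zip(a, b, y)) / n

def perm_importance(model, feature):
    """Permutation importance: increase in MSE after shuffling one feature."""
    base = mse(model, x1, x2)
    if feature == 1:
        shuffled = x1[:]
        random.shuffle(shuffled)
        return mse(model, shuffled, x2) - base
    shuffled = x2[:]
    random.shuffle(shuffled)
    return mse(model, x1, shuffled) - base

# Two models that make near-identical predictions on this data...
split = lambda a, b: 0.5 * a + 0.5 * b   # spreads weight across both features
single = lambda a, b: a                  # leans entirely on x1

# ...yield very different attributions: `single` assigns x2 zero importance
# even though x2 is just as predictive as x1.
print("split :", perm_importance(split, 1), perm_importance(split, 2))
print("single:", perm_importance(single, 1), perm_importance(single, 2))
```

Neither attribution is "wrong"; the ambiguity comes from the correlation itself, which is exactly the difficulty the interviewee raises.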
In general, the data scientists who were interviewed believe that explainable methods have broader advantages, since their outputs can be communicated to a wider audience rather than only to the immediate stakeholders. In short, explainability helps share insights across the organisation without requiring a specialist's assistance in every scenario.
In financial organisations, where regulatory requirements are comparatively stringent, deployed ML models must go through an internal audit. Data scientists have these models reviewed by internal risk and legal teams.
“Figuring out causal factors is the holy grail of explainability”
Apart from debugging and auditing models, the data scientists also acknowledged the importance of considering data privacy in the context of explainability. In the course of making models more explainable, privacy cannot be put at stake. Be it medical diagnosis or credit card risk estimation, the personal information being processed can be very sensitive, and organisations that are serious about implementing explainability are addressing this.
One of the vital purposes of explanations is to improve ML engineers’ understanding of their models in order to help them refine and improve performance. Since machine learning models are “dual-use”, the authors suggest that we should be aware that in some settings, explanations or other tools could enable malicious users to increase capabilities and performance of undesirable systems.
Recommendations From The Industry
For any organisation, it is important to identify the stakeholders: those affected by the model outputs. Stakeholders have different needs for explainability, and their use of explanations falls into two categories:
- Static Consumption: Will the explanation be used as a one-off sanity check for some stakeholders or shown to other stakeholders as reasoning for a particular prediction?
- Dynamic Model Updates: Will the explanation be used to garner feedback from the stakeholder as to how the model ought to be updated to better align with their intuition?
The first step to making things more explainable, the authors wrote, is to make the models simpler. Although deep learning has gained popularity in recent years, many organisations still use classical ML techniques (e.g., logistic regression, support vector machines). Model-agnostic techniques can be applied to traditional models but are "likely overkill" for explaining kernel-based ML models, according to one research scientist, since model-agnostic methods can be computationally expensive and can produce poorly approximated explanations.
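To illustrate why model-agnostic methods can be overkill for simple models, consider a logistic regression: its fitted coefficients already are the explanation, with no sampling-based approximation needed. A minimal sketch using hypothetical weights for an imagined credit model (our example, not from the paper):

```python
import math

# Hypothetical fitted logistic-regression weights -- for an intrinsically
# interpretable model like this, the coefficients can be read directly.
weights = {"income": 0.8, "debt": -1.2, "age": 0.1}

def odds_ratio(feature):
    """Multiplicative change in the odds of approval per unit increase
    of the feature (exp of the logistic-regression coefficient)."""
    return math.exp(weights[feature])

for f in weights:
    print(f, round(odds_ratio(f), 2))
```

A value above 1 means the feature raises the odds of a positive outcome; below 1, it lowers them. No perturbation or surrogate modelling is required, which is the point the interviewed research scientist makes.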
Simpler models also make it easier to segment and inspect the pipeline. Complex models, however, are more flexible and scale better in real-time settings.
In this work, the authors address the other end of the pipeline, which is rarely discussed: deployment. Technical debt and use-case-specific challenges pile up in the long run and can make adopting new strategies, such as explainable models, difficult. The work shows that organisations have already started taking explainability seriously, and here are a few key takeaways from the paper:
- Feature-level interpretations (also known as feature attributions or saliency maps) are by far the most widely used and best-studied explainability technique
- Feature importance is not shown to end-users but is used by machine learning engineers as a sanity check
- Organisations are interested in counterfactual explanation solutions since the underlying method is flexible and such explanations are easy for end-users to understand
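The appeal of counterfactual explanations in the last takeaway is easy to see in a toy sketch (the scoring rule and numbers below are hypothetical, not from the paper): the explanation is simply the smallest change to an input that flips the model's decision, which translates directly into actionable advice for an end-user.

```python
# Hypothetical loan-approval rule -- purely illustrative, not a real model.
def approve(income, debt):
    return income - 0.5 * debt > 50

def counterfactual_income(income, debt, step=1.0, max_steps=1000):
    """Smallest income increase (in `step` units) that flips a rejection
    to an approval; returns None if no flip is found within the budget."""
    for k in range(max_steps + 1):
        if approve(income + k * step, debt):
            return k * step
    return None

# An applicant rejected at income=40, debt=20 can be told exactly
# what would change the outcome:
delta = counterfactual_income(income=40, debt=20)
print(delta)  # 21.0: raising income to 61 clears the threshold
```

The same search strategy works regardless of what `approve` looks like inside, which is the flexibility of the underlying method that organisations find attractive.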
Although domain knowledge would clear the initial hiccups, say the authors, the question of who gets to pick still looms large.
Know more about this work here.
I have a master's degree in Robotics and I write about machine learning advancements. email:firstname.lastname@example.org