The increasing performance and integration of AI in the workplace have been accompanied by increasing complexity in the models. A model is, by definition, a simplification of the real world, which means there will always be some relevant external context that is not part of the model. This holds for AI models as well.
Human-in-the-loop systems are essentially about providing this context to AI models. The context can take various forms: removing bias from models so they adhere to ethical standards, providing situational awareness to improve predictions, or exercising final oversight before a decision is made. It also runs the other way: the AI system provides context to the human for further action.
When this critical piece of information is missing, we get what is popularly known as “the black box problem”, where users don’t really understand how the model has processed the data and arrived at a decision. Given that algorithms now drive parts of our lives, from driving cars and recommending products to making investment decisions and even predicting employee attrition, it is becoming integral for stakeholders to understand and trust these AI operations.
The key to designing a successful human-in-the-loop system is solving this challenge of two-way communication of context: the model must be explainable to people, and people must be able to feed context back into the model.
Most people don’t realise that data science is largely about storytelling, taking stakeholders and customers through the analysis being presented. But this is not possible when algorithmic outputs are shown as finished products. My years of working on data science-based products have taught me that they sell when the customer can understand the product’s analysis. For instance, a key reason for the failure of IBM’s oncology technology was the lack of trust it inspired in the clinicians who had to use it. XAI helps the product explain the results of a multidimensional model with complex decision boundaries. For organisations of any size with data science teams, establishing a set of norms to ensure this two-way communication is essential.
How Organisations can instil Explainable AI
- Creating Values for your Analytics Team
Leaders need to probe their data science teams to ensure the model outputs can be explained to stakeholders. There is a need to ensure that the company’s values include Responsible AI, including, but not limited to, Ethical AI and Explainable AI.
Leaders must create a framework of high-level value statements with examples that can illustrate how these values translate into real-world choices for the analytics team.
A practical technique to instil the values of XAI is using ‘mind maps’ to translate these corporate values into concrete guidelines on how to use AI. For instance, mapping the values to AI reputational risks in a financial company can lead to guidelines stating that AI can be used to provide recommendations for clients, but that a human should always be in the loop when advising a client on financial vulnerability.
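The guideline above can be sketched as a simple routing rule. Everything here is illustrative, not a real system: the `Recommendation` type, topic names, and queue names are all assumptions chosen to show how a value statement becomes an enforceable check in code.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    client_id: str
    topic: str   # e.g. "portfolio_rebalance", "financial_vulnerability"
    text: str

# Topics the hypothetical guideline flags as requiring a human adviser.
SENSITIVE_TOPICS = {"financial_vulnerability", "debt_restructuring"}

def requires_human_review(rec: Recommendation) -> bool:
    """Return True when the corporate guideline demands a human in the loop."""
    return rec.topic in SENSITIVE_TOPICS

def route(rec: Recommendation) -> str:
    # Sensitive advice is queued for an adviser; the rest goes out directly.
    return "human_review_queue" if requires_human_review(rec) else "auto_send"

print(route(Recommendation("c1", "portfolio_rebalance", "Shift 5% to bonds")))
# -> auto_send
```

The point of encoding the rule this way is that the value statement stops being aspirational prose: it becomes a testable branch that an audit can verify.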
- Set Metrics to Evaluate AI
While value statements are a great starting point, it is essential to create hard boundaries with measurement metrics and definitions for assessing AI solutions. For instance, while AI can screen candidates for a job interview, metrics are essential to check the AI for gender or social biases.
Leaders need to create a culture of defining and setting metrics among the team members, aligning with the company values of using responsible AI.
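One example of such a hard boundary, sketched under assumptions: demographic parity difference for a hypothetical CV-screening model. A value near 0 means the model advances candidates from both groups at similar rates; a team could set a threshold (say, 0.1) as the measurable limit the text describes. The toy data and threshold are illustrative only.

```python
def selection_rate(decisions):
    """Fraction of candidates the model advanced to interview."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute gap in selection rates between two demographic groups."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

# 1 = advanced to interview, 0 = rejected (toy data)
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 = 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"parity gap: {gap:.3f}")      # 0.250 -- would fail a 0.1 threshold
```

Demographic parity is only one of several fairness definitions; the organisational point is that whichever metric the team picks, it gets a numeric threshold that the model must meet before deployment.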
- Training Employees to meet the Metrics
XAI and Ethical AI are fairly new concepts, making it important to train and educate your employees on meeting these guidelines. For instance, when Google developed its principles to define responsible AI, it supported them with tools and training for employees. Organisations can offer technical training modules to their employees on how bias creeps in, how to create simplified models, and how to master techniques to mitigate these challenges.
- Applying Prevailing XAI Techniques
A few XAI techniques in the market can assist practitioners in unravelling complex algorithms and their inner workings. Organisations can apply these techniques based on their product offerings and approach.
- LIME: Local Interpretable Model-agnostic Explanations – LIME explains an individual prediction by fitting a simple, interpretable surrogate model in the neighbourhood of the instance being explained. Being model agnostic, it works with almost every algorithm and is easy to implement.
- SHAP: Shapley Additive Explanations – SHAP assigns each feature a Shapley value, computed from conditional expectations of the model output, that quantifies the feature’s contribution to a prediction. Its visualisations help present the factors responsible for the AI model’s outcome in simple terms. SHAP is not limited to local interpretability; aggregating Shapley values across many predictions also yields global interpretability.
- PDP: Partial Dependence Plots – PDPs present the marginal effect of the features in question on the predicted outcome of the ML model. They help illustrate the relationship between the variables and the prediction through visual, model-agnostic techniques.
- Activation Atlases: A project by OpenAI and Google Research, Activation Atlases is a new way of visualising the interactions between neurons to see what neural networks represent. This provides a better understanding of the internal decision-making processes inside a black box.
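To make the LIME entry above concrete, here is a toy sketch of its core idea, not the `lime` library itself: perturb the instance, query the black box, weight samples by proximity, and fit a weighted linear surrogate whose coefficients explain the local prediction. The black-box function and kernel width are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for any opaque model: nonlinear in feature 0, linear in feature 1.
    return np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]

def lime_explain(x0, n_samples=2000, width=0.2):
    # 1. Sample perturbations around the instance being explained.
    X = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    y = black_box(X)
    # 2. Weight samples by closeness to x0 (Gaussian kernel).
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * width ** 2))
    # 3. Fit a weighted least-squares linear surrogate.
    A = np.hstack([X, np.ones((n_samples, 1))])      # add intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]                                 # per-feature local weights

x0 = np.array([0.1, 1.0])
print(lime_explain(x0))  # local slopes: near 3*cos(0.3) for feature 0, 0.5 for feature 1
```

The surrogate’s coefficients are the explanation: they describe how the black box behaves near this one instance, which is exactly the “local” in LIME.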
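Similarly, the Shapley computation behind SHAP can be sketched exactly for a small feature count (the `shap` library uses faster approximations): each feature’s value is its average marginal contribution over all feature subsets, with “missing” features replaced by a baseline such as the dataset mean. The linear model and numbers below are toy choices.

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    n = len(x)
    phi = np.zeros(n)

    def value(subset):
        # Predict with features in `subset` taken from x, the rest from baseline.
        z = baseline.copy()
        z[list(subset)] = x[list(subset)]
        return model(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# For a linear model, phi_i should equal w_i * (x_i - baseline_i) exactly.
w = np.array([2.0, -1.0, 0.5])
model = lambda z: float(w @ z)
x = np.array([1.0, 3.0, -2.0])
baseline = np.array([0.5, 1.0, 0.0])   # e.g. dataset feature means

phi = shapley_values(model, x, baseline)
print(phi)  # [ 1. -2. -1.]
```

A useful sanity check is the efficiency property: the Shapley values sum to the difference between the prediction at `x` and the prediction at the baseline.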
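Finally, the PDP entry above reduces to a short loop: for each grid value of the feature of interest, overwrite that column across the whole dataset, average the model’s predictions, and plot the resulting curve. The stand-in model and data are illustrative.

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v           # force the feature to the grid value
        pd_values.append(model(Xv).mean())
    return np.array(pd_values)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
model = lambda X: X[:, 0] ** 2 + 0.3 * X[:, 1]   # stand-in black box

grid = np.linspace(-2, 2, 9)
curve = partial_dependence(model, X, 0, grid)
print(np.round(curve, 2))   # roughly grid**2 plus a constant offset
```

Plotting `curve` against `grid` would reveal the U-shaped marginal effect of feature 0, which is precisely the relationship a PDP is meant to surface.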
Often, the biggest challenge with a biased AI model is the lack of accountability, captured in the famous phrase ‘the algorithm made me do it’. Organisations need to train and upskill their workforce to find problems in their algorithms early on and fix them before they cause harm.
This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill the form here.