Council Post: Building Human-In-The-Loop Systems

The increasing performance and integration of AI in the workplace have been accompanied by increasing complexity in the models. A model, by definition, captures only a subset of the real world, which means there will always be some relevant external context that is not part of the model; this holds for AI models as well.

Human-in-the-loop systems are essentially about providing this context to AI models. The context can take several forms: removing bias from models so they adhere to ethical standards, supplying situational awareness to improve predictions, or providing final human oversight before a decision is made. The flow also runs the other way, with the AI system providing context to the human for further action.

When this critical piece of information is missing, it leads to what is popularly known as “the black box problem”, where users do not really understand how the model has processed the data and arrived at a decision. Given that algorithms now drive parts of our lives, from driving cars and recommending products to making investment decisions and even predicting employee attrition, it is becoming integral for stakeholders to understand and trust these AI operations.


The key to designing a successful human-in-the-loop system is solving this challenge of two-way communication of context: humans supply the model with the context it lacks, such as ethical guardrails, situational awareness and final oversight, and the model explains its reasoning back to the humans who act on its outputs.

Most people don’t realise that data science is largely about storytelling: taking stakeholders and customers through the analysis being presented. That is not possible when algorithmic outputs are shown as finished products with no explanation. My years of working on data science-based products have taught me that they sell when the customer can understand the product’s analysis. For instance, a key reason for the failure of IBM’s oncology technology was the lack of trust in it among its users. XAI helps the product explain the results of a multidimensional model with multimodal decision boundaries. For organisations of any size with data science teams, establishing a set of norms to ensure this two-way communication is essential.

Organisations need to adopt XAI products to prevent cases like Microsoft’s racist Twitter bot or Amazon’s sexist hiring system.  

How Organisations Can Instil Explainable AI

  1. Creating Values for your Analytics Team

Leaders need to probe their data science teams to ensure the model outputs can be explained to the stakeholders. The company’s values need to include Responsible AI, including, but not limited to, Ethical AI and Explainable AI.

Leaders must create a framework of high-level value statements with examples that can illustrate how these values translate into real-world choices for the analytics team.

A practical technique to instil the values of XAI is using ‘mind maps’ to translate these corporate values into concrete guidelines on how to use AI. For instance, mapping the values to AI reputational risks in a financial company can lead to a guideline that AI may generate recommendations for clients, but a human must always be in the loop when advising a client on financial vulnerability.
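
To make this concrete, here is a minimal Python sketch of such a human-in-the-loop gate. Everything in it is hypothetical and illustrative (the `Recommendation` class, the `SENSITIVE_TOPICS` set, the risk threshold); the point is simply that the model may generate recommendations freely, but anything touching a client’s financial vulnerability is routed to a human advisor before it reaches the client.

```python
from dataclasses import dataclass

# Hypothetical example: route AI recommendations to a human reviewer
# whenever they touch a sensitive topic or exceed a risk threshold.

SENSITIVE_TOPICS = {"financial_vulnerability", "debt_restructuring"}
RISK_THRESHOLD = 0.7  # illustrative cut-off, tuned per organisation


@dataclass
class Recommendation:
    client_id: str
    topic: str
    text: str
    model_risk_score: float  # model's own estimate of potential harm


def requires_human_review(rec: Recommendation) -> bool:
    """Gate: sensitive topics or high-risk scores always go to a human."""
    return rec.topic in SENSITIVE_TOPICS or rec.model_risk_score >= RISK_THRESHOLD


def dispatch(rec: Recommendation) -> str:
    if requires_human_review(rec):
        # In a real system this would create a review task for an advisor,
        # attaching the model's explanation so the human has full context.
        return f"QUEUED for human advisor: {rec.client_id} ({rec.topic})"
    return f"SENT automatically: {rec.client_id} ({rec.topic})"


if __name__ == "__main__":
    print(dispatch(Recommendation("c-001", "portfolio_rebalance", "Shift 5% to bonds.", 0.2)))
    print(dispatch(Recommendation("c-002", "financial_vulnerability", "Defer loan payments.", 0.4)))
```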

  2. Set Metrics to Evaluate AI 

While value statements are a great starting point, it is essential to create hard boundaries with measurement metrics and definitions for assessing AI solutions. For instance, while AI can screen candidates for a job interview, metrics are essential to check the AI for gender or social biases. 
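
As an illustration, the sketch below computes one such hard metric, the demographic parity difference (the gap in selection rates between two candidate groups), on hypothetical screening outputs. The data, the group labels and the 0.1 threshold are all illustrative assumptions, not a standard.

```python
import numpy as np

# Hypothetical screening outcomes: 1 = candidate passed AI screening, 0 = rejected.
# `group` marks a protected attribute (e.g. gender) for the same candidates.
selected = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])


def demographic_parity_difference(selected, group):
    """Absolute gap in selection rates between the two groups."""
    rate_a = selected[group == "A"].mean()
    rate_b = selected[group == "B"].mean()
    return abs(rate_a - rate_b)


gap = demographic_parity_difference(selected, group)
print(f"Selection-rate gap: {gap:.2f}")

# Illustrative hard boundary: flag the model if the gap exceeds 0.1.
if gap > 0.1:
    print("Metric breached: route the screening model for human review.")
```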

Leaders need to create a culture of defining and setting such metrics among team members, aligned with the company’s values of responsible AI. 

  3. Training Employees to Meet the Metrics

XAI and Ethical AI are fairly new concepts, making it important to train and educate your employees on meeting these guidelines. For instance, when Google developed its principles defining responsible AI, it supported them with tools and training for employees. Organisations can offer technical training modules to their employees on how bias creeps in, how to create simplified models, and techniques to mitigate these challenges. 

  4. Applying Prevailing XAI Techniques 

A few XAI techniques on the market can assist practitioners in unravelling complex algorithms and explaining their outputs. Organisations can apply these techniques based on their product offerings and approach; minimal, hedged sketches of the first three techniques follow the list below.

  • LIME: Local Interpretable Model-Agnostic Explanations – LIME explains individual predictions by fitting a simple, interpretable surrogate model locally around each prediction. Being model agnostic, it works with almost every algorithm and is easy to implement. 
  • SHAP: Shapley Additive Explanations – SHAP assigns each feature a Shapley value, computed from conditional expectations, that quantifies the feature’s contribution to a prediction. Its visualisations make it easy to present the factors responsible for the AI model’s outcome. SHAP is not limited to local interpretability of individual predictions; it also offers global interpretability across the whole dataset. 
  • PDP: Partial Dependence Plots – PDPs present the marginal effect of the features in question on the outcome predicted by the ML model. They help illustrate the relationship between the variables and the prediction through visual, model-agnostic techniques. 
  • Activation Atlases: A project by OpenAI and Google Research, Activation Atlases is a new way of visualising interactions between neurons to see what features neural networks detect. This provides a better understanding of the internal decision-making processes in a black box.
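
A minimal LIME sketch, assuming the open-source `lime` package and a scikit-learn classifier trained on a bundled toy dataset; the model and dataset choices are illustrative, not a recommendation:

```python
# Minimal LIME sketch: explain one prediction of a tabular classifier.
# Assumes `pip install lime scikit-learn`; dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single row: LIME fits a simple local surrogate around it.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```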
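
A minimal SHAP sketch along the same lines, assuming the `shap` package and a tree-based regressor on a toy housing dataset; the per-row values give the local view and the summary plot the global one:

```python
# Minimal SHAP sketch: local and global feature attributions for a tree model.
# Assumes `pip install shap scikit-learn`; dataset and model are illustrative.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor

# Toy regression setup (the dataset is downloaded on first use).
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])  # one Shapley value per feature per row

# Local view: contribution of each feature to the first prediction.
print(dict(zip(X.columns, shap_values[0].round(3))))

# Global view: which features matter most overall.
shap.summary_plot(shap_values, X.iloc[:200])
```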
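
And a minimal PDP sketch using scikit-learn’s built-in `PartialDependenceDisplay`; the two feature choices are illustrative:

```python
# Minimal PDP sketch: marginal effect of two features on the predicted outcome.
# Assumes `pip install scikit-learn matplotlib`; reuses a toy regression setup.
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Plot partial dependence of the prediction on median income and house age.
PartialDependenceDisplay.from_estimator(model, X, features=["MedInc", "HouseAge"])
plt.show()
```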

Often, the biggest challenge with a biased AI model is the lack of accountability, captured in the famous phrase ‘the algorithm made me do it’. Organisations need to train and upskill their workforce to find problems in their algorithms early on and nip them in the bud before they cause harm. 

This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for membership, please fill out the form here.

Ashwin Swarup
Ashwin is the VP of Data Science and Data Engineering at Digité Inc. He brings with him more than a decade’s experience in leading international data science teams. His focus is on building data science products that can scale for enterprise-level AI applications. Currently, he is working to realise the power of XAI in decision sciences and building human-in-the-loop systems to simplify complex business processes for CXO offices.
