In my recent article, Machine Learning: Enhancing Probability or Predictability?, I briefly mentioned the biases that can be unconsciously introduced into Machine Learning (ML) models and algorithms. Adopting an agnostic approach can go a long way toward controlling such biases. Biased ML models are ultimately the outcome of the training data (labelled or actual data) used to build them.
In a computing environment, an agnostic approach is one that is interoperable across systems and carries no prejudice toward a specific technology, model, methodology or data source. The approach extends beyond these factors to business processes and practices as well. In the sections that follow, we will discuss how an agnostic approach provides the independence, technical agility and flexibility needed to build sustainable ML models.
There are many open-source and proprietary ML technologies, or more specifically software libraries with Application Programming Interfaces (APIs), available on the internet. I am not going to name and list them here, as they can be found easily 😊. The important point is to understand which solution will be the most efficient and effective for the problem we are trying to solve, without restricting ourselves to a single type of technology.
The technology stack need not comprise a large number of technologies, but it must include technologies capable of supporting the iterative nature of the machine learning process and the evolution of the model as the training dataset grows. It is also important that the chosen technology support data cleaning, feature engineering, model training, model scoring and model evaluation. At the same time, a technology-agnostic approach leaves us free to evaluate different technologies before deciding on the most optimal solution.
An ideal ML model is one that can answer the business problem and explain why a specific decision was taken in a specific circumstance. Model-agnostic interpretation methods are far more flexible than model-specific methods. The model-agnostic approach focuses on the following aspects of flexibility:
- Model flexibility – the interpretation method can work with any machine learning model.
- Explanation flexibility – there is no limitation to a single form of explanation.
- Representation flexibility – the model being explained can use a different feature representation than the explanation.
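Model flexibility is easiest to see in code. The sketch below is a minimal, illustrative implementation of permutation feature importance, a classic model-agnostic interpretation method: it treats the model purely as a black-box `predict` callable, so the same routine works for any model. The toy model and data are my own invented example, not from the article.

```python
import random

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Model-agnostic importance: shuffle one feature column at a time
    and measure how much the model's accuracy drops on average."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - accuracy(X_perm))
        importances.append(sum(drops) / n_repeats)
    return importances

# Works with ANY model exposing a predict(row) callable -- here a toy
# rule that only looks at feature 0 and ignores feature 1.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3], [0.7, 0.2], [0.3, 0.8]]
y = [1, 1, 0, 0, 1, 0]
imp = permutation_importance(model, X, y)
# Shuffling feature 1 never changes this model's predictions, so its
# importance is exactly 0; feature 0 carries all the signal.
```

Because nothing here inspects the model's internals, the same function could score a decision tree, a neural network or a hand-written rule, which is precisely the model flexibility described above.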
Here, it is important to understand interpretability, as it explains not only the “what” of a prediction but also the “why”. Knowing the “why” can help us learn more about the problem, the data and the reasons a model might fail. Some models may not require explanations because they operate in a low-risk environment, such as purchase recommendations in an online shop based on past buying patterns. Explanations are required, however, in a high-risk environment, such as fraud detection that flags an online transaction based on transaction history or an abnormally high-value transaction.
Interpretability is also a useful debugging tool for detecting bias in machine learning models. For example, a model trained to automatically approve or reject income tax returns could discriminate by type of assessee (corporate or individual), when the goal should be to ensure tax compliance and to expedite tax collections and refunds. The Income Tax Department is also obliged not to discriminate between assessees on the basis of such classifications or labels.
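A first, coarse bias check along these lines is simply to compare the model's approval rates across the groups it should not discriminate between. The sketch below is a hypothetical example with invented decisions, not real tax data; the group names and the records are assumptions for illustration.

```python
from collections import defaultdict

def approval_rates_by_group(records):
    """Group model decisions by a sensitive attribute and compute the
    approval rate per group -- a coarse check for disparate treatment."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, decision in records:   # decision: 1 = approved, 0 = rejected
        total[group] += 1
        approved[group] += decision
    return {g: approved[g] / total[g] for g in total}

# Hypothetical model outputs for tax returns: (assessee type, decision)
records = [("Corporate", 1), ("Corporate", 1), ("Corporate", 1), ("Corporate", 0),
           ("Individual", 1), ("Individual", 0), ("Individual", 0), ("Individual", 0)]
rates = approval_rates_by_group(records)
# Corporate: 0.75, Individual: 0.25 -- a ratio this far from 1.0 is a
# signal to audit the training data and features for proxy bias.
ratio = min(rates.values()) / max(rates.values())
```

A large gap does not prove bias by itself (the groups may differ legitimately), but it tells us where interpretation methods should be pointed next.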
The methodology-agnostic approach focuses on aligning the tools, technologies, execution methodology and culture with the business model and the needs of the business, not vice versa. A typical business model describes how the organization will create, deliver and sustain long-term value. The methodology for building an ML model must also allow for ease of development, testing, deployment, training, support and subsequent change. I am not trying to be prescriptive here by advocating an Agile, Waterfall or hybrid approach; the focus is on value-driven delivery, with a cushion for learning and experimentation, because it is unlikely that you will get the model right on the first attempt and release.
A data-agnostic approach signifies the system’s ability to process data collected from heterogeneous sources and convert it into actionable insights. The ML model should be designed so that it can process unstructured data as seamlessly as it processes structured data. Several types of datasets are involved in building an ML model that makes data-driven predictions or decisions:
- Training dataset – examples used for learning, i.e., building predictive models by identifying patterns.
- Test dataset – examples used to assess likely future performance: how well the trained model predicts new outcomes.
- Validation dataset – examples used to find and tune the best model for a given problem.
- Holdout dataset – a part of the original dataset set aside and used as a test set.
- Cross-validation dataset – a dataset repeatedly split into a training dataset and a validation dataset.
Creating these datasets requires a holistic approach to data collection, formatting, cleaning, decomposition, normalization and categorization, while addressing concerns that could arise from model overfitting or underfitting.
Last but not least, the ML model needs to be industry-agnostic, i.e., it should work in any type of industry (Banking, Finance, Retail, Manufacturing, Insurance, etc.) notwithstanding each industry's specifics. This allows flexibility of implementation, extensibility and scalability.
It is best practice to develop scalable ML models that can function on their own, without dependencies on a specific technology, underlying interpretable model, methodology or dataset. A model that meets these criteria will adapt better to change and will deliver better prediction accuracy, thanks to the ample training and learning that agnostic approaches enable. A further benefit is better explainability, along with democratization of adoption and benefits realization.
- Christoph Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, 2020-03-09, https://christophm.github.io/interpretable-ml-book/index.html
- Wikipedia links
Views expressed in this article are my own and may not necessarily be of my employer.