5 Important Techniques To Process Imbalanced Data In Machine Learning

Handling imbalanced data distributions is an important part of the machine learning workflow. A dataset is imbalanced when the number of observations is not the same for all classes — in a binary problem, instances of one class far outnumber instances of the other. This problem arises not only in binary classification but also in multi-class data.

In this article, we list some important techniques that will help you deal with your imbalanced data.

1| Oversampling


Oversampling modifies the unequal class distribution to create a balanced dataset. When the quantity of minority-class data is insufficient, the oversampling method balances the classes by increasing the number of rare samples.

A primary oversampling technique is SMOTE (Synthetic Minority Over-sampling TEchnique). Instead of over-sampling with replacement, SMOTE over-samples the minority class by producing synthetic examples: for each minority-class observation it computes the k nearest neighbours (k-NN), and, depending on the amount of oversampling required, randomly chooses neighbours and interpolates new samples between the observation and those neighbours. The technique rests on the assumption that the local space between any two minority instances also belongs to the minority class, which may not hold when the training data is not linearly separable.
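The interpolation step described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the reference implementation — the function name `smote` and the parameters `k`, `n_new`, and `seed` are our own choices here:

```python
import random

def smote(minority, k=3, n_new=4, seed=0):
    """SMOTE sketch: create synthetic minority samples by interpolating
    between each chosen sample and one of its k nearest neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance (excluding x)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nn = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nn)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (1.1, 1.2)]
new_points = smote(minority, k=2, n_new=3)
```

Because each synthetic point lies on the segment between two real minority samples, it always falls inside the region the minority class already occupies — which is both the technique's strength and the source of the assumption noted above.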

  • No loss of information
  • Mitigates the overfitting caused by random oversampling with replacement

To take a deep dive into the SMOTE technique, click here.

2| Undersampling

Unlike oversampling, this technique balances an imbalanced dataset by reducing the size of the class that is in abundance. There are various methods, such as cluster centroids and Tomek links. The cluster centroid method replaces a cluster of majority samples with the cluster centroid from a K-means algorithm, while the Tomek links method removes unwanted overlap between classes by deleting majority samples until all minimally distanced nearest-neighbour pairs belong to the same class.
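The simplest form of this idea is random undersampling: drop majority-class samples at random until every class matches the size of the smallest one. The following is a minimal sketch (the function name `random_undersample` is ours, not from any library):

```python
import random
from collections import Counter

def random_undersample(X, y, seed=0):
    """Randomly drop samples from over-represented classes until
    every class has as many samples as the smallest class."""
    rng = random.Random(seed)
    target = min(Counter(y).values())  # size of the smallest class
    idx = list(range(len(y)))
    rng.shuffle(idx)                   # shuffle so dropped samples are random
    seen, kept = Counter(), []
    for i in idx:
        if seen[y[i]] < target:        # keep at most `target` per class
            seen[y[i]] += 1
            kept.append(i)
    kept.sort()                        # restore original ordering
    return [X[i] for i in kept], [y[i] for i in kept]

X = [[i] for i in range(10)]
y = [0] * 8 + [1] * 2                  # 8 majority vs 2 minority samples
Xb, yb = random_undersample(X, y)      # balanced: 2 samples per class
```

Methods like cluster centroids and Tomek links refine this by choosing *which* majority samples to drop, rather than dropping them at random.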


  • Run-time can be improved by decreasing the size of the training dataset
  • Helps in solving memory problems

To learn more about undersampling, click here.

3| Cost-Sensitive Learning Technique

Cost-Sensitive Learning (CSL) takes misclassification costs into consideration and minimises the total cost rather than the total number of errors. Whereas standard classifiers mainly pursue high accuracy in classifying examples into a set of known classes, CSL recognises that different mistakes carry different costs. It plays an important role in machine learning algorithms, including real-world data mining applications.

In this technique, the costs of a false positive (FP), false negative (FN), true positive (TP), and true negative (TN) are represented in a cost matrix, where C(i, j) is the cost of classifying an instance as class "i" (the predicted class) when "j" is the actual class.
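A minimal sketch of how such a cost matrix changes the decision rule: instead of thresholding the predicted probability at 0.5, we pick the class with the lowest expected cost. The matrix values and the function name `cost_sensitive_predict` are illustrative assumptions, not from any particular library:

```python
# Cost matrix C[i][j]: cost of predicting class i when the true class is j.
# Correct predictions (TN, TP) cost 0; here a false negative is 5x worse
# than a false positive (e.g. missing a fraud case vs. a false alarm).
C = [[0, 5],   # predict 0: TN costs 0, FN costs 5
     [1, 0]]   # predict 1: FP costs 1, TP costs 0

def cost_sensitive_predict(p_pos, C):
    """Pick the class with the lowest expected misclassification cost,
    given the model's estimated probability p_pos that the true class is 1."""
    probs = [1 - p_pos, p_pos]
    expected = [sum(C[i][j] * probs[j] for j in range(2)) for i in range(2)]
    return min(range(2), key=lambda i: expected[i])

# With a plain 0.5 threshold, p_pos = 0.3 would be classified as 0; the
# high false-negative cost pushes the decision to class 1 instead.
pred = cost_sensitive_predict(0.3, C)  # returns 1
```

This is why CSL suits imbalanced problems: the rare class is usually the one whose misclassification is expensive, and the cost matrix encodes that directly.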

To take a deep dive into the CSL technique, click here.


  • This technique avoids pre-selection of parameters and auto-adjusts the decision hyperplane

4| Ensemble Learning Techniques

The ensemble-based method is another technique used to deal with imbalanced datasets. An ensemble combines the results of several classifiers to improve on the performance of a single classifier, improving the generalisation ability of individual classifiers by assembling them and combining the outputs of multiple base learners. There are various approaches in ensemble learning, such as bagging and boosting.

Bagging (Bootstrap Aggregating) trains similar learners on bootstrap resamples of the dataset and then averages (or votes over) all their predictions. Boosting (e.g. AdaBoost) is an iterative technique that adjusts the weight of each observation depending on the last classification. Boosting decreases the bias error and builds strong predictive models.
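The bagging half of the paragraph above can be sketched end to end with a deliberately weak base learner, a one-dimensional decision stump, and a majority vote. Everything here (the stump, the function names, the toy data) is an illustrative assumption:

```python
import random
from collections import Counter

def fit_stump(X, y):
    """Fit a 1-D decision stump: the threshold minimising training error,
    predicting class 1 for values >= threshold."""
    best = None
    for t in sorted(set(X)):
        err = sum((1 if x >= t else 0) != yi for x, yi in zip(X, y))
        if best is None or err < best[0]:
            best = (err, t)
    return best[1]

def bagging_predict(X, y, x_new, n_models=11, seed=0):
    """Bagging sketch: train stumps on bootstrap resamples, majority-vote."""
    rng = random.Random(seed)
    n, votes = len(X), []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap sample
        t = fit_stump([X[i] for i in idx], [y[i] for i in idx])
        votes.append(1 if x_new >= t else 0)
    return Counter(votes).most_common(1)[0][0]     # majority vote

X = [0.1, 0.2, 0.3, 0.4, 2.1, 2.2, 2.3, 2.4]
y = [0, 0, 0, 0, 1, 1, 1, 1]
```

Each stump sees a slightly different resample, so their individual errors partly cancel in the vote — this is the variance reduction that makes the combined model more stable than any single learner.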


  • The combined model is more stable
  • The predictions are better

To learn more about this technique, click here.

5| Combined Class Methods

In this type of method, several techniques are fused together to handle imbalanced data better. For instance, SMOTE can be combined with other methods — MSMOTE (Modified SMOTE), SMOTEENN (SMOTE with Edited Nearest Neighbours), SMOTE-TL (SMOTE with Tomek links), SMOTE-EL, etc. — to eliminate noise in imbalanced datasets. MSMOTE, a modified version of SMOTE, classifies the samples of the minority class into three groups: security samples, latent noise samples, and border samples.
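The "cleaning" half of a combined method like SMOTEENN can be sketched on its own: Edited Nearest Neighbours (ENN) removes any sample whose label disagrees with the majority of its k nearest neighbours, which strips out noisy points left after oversampling. The function name `enn_clean` and the toy data are our own illustrative choices:

```python
def enn_clean(X, y, k=3):
    """ENN cleaning sketch: drop samples whose label disagrees with the
    majority label of their k nearest neighbours."""
    keep = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        # labels of the k nearest other samples, by squared Euclidean distance
        nearest = sorted(
            (sum((a - b) ** 2 for a, b in zip(xi, xj)), yj)
            for j, (xj, yj) in enumerate(zip(X, y)) if j != i
        )[:k]
        votes = [label for _, label in nearest]
        if votes.count(yi) * 2 > len(votes):  # neighbours agree -> keep
            keep.append(i)
    return [X[i] for i in keep], [y[i] for i in keep]

# Two tight clusters, plus one class-1 point sitting inside class 0's cluster.
X = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.05, 0.05),
     (3.0, 3.0), (3.1, 3.0), (3.0, 3.1)]
y = [0, 0, 0, 1, 1, 1, 1]
Xc, yc = enn_clean(X, y, k=3)  # the stray point at (0.05, 0.05) is removed
```

In a full SMOTEENN-style pipeline, a step like this runs after oversampling, so synthetic points generated in noisy overlap regions are cleaned away rather than kept.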


  • No loss of useful information
  • Good generalisation

Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
