Guide to Multi-Class Classification

Binary classification, such as predicting 1 or 0 (the patient is diabetic or not diabetic), involves only two classes. In most real-world scenarios, however, a domain can contain any number of categories or classes. Not every predictive classification model supports this out of the box: logistic regression and support vector machines are designed as binary classifiers and do not natively handle more than two classes (although they can be extended with strategies such as one-vs-rest). In contrast, decision tree classifiers, k-nearest neighbours, naive Bayes and neural network-based models handle multi-class classification directly and often give superior performance on it.

Difference between Binary and Multi-Class Classification

What is Multi-Class Classification?

A multi-class classification problem is one with more than two classes, such as classifying a series of dog breed photographs that may show a pug, a bulldog, or a Tibetan mastiff. Multi-class classification assumes that each sample is assigned to exactly one class, e.g. a dog can be either a pug or a bulldog but not both simultaneously.

Many approaches are used to solve this problem, such as converting the N classes into N binary columns, one per class. By doing so, we can use a binary classifier for a multi-class problem. The pandas built-in get_dummies function provides this binary (one-hot) representation of each class, as shown below.
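As a minimal sketch of that idea, consider a hypothetical 'breed' column (the column name and values are made up for this example):

 import pandas as pd
 # Hypothetical target column with three dog breeds
 df = pd.DataFrame({'breed': ['pug', 'bulldog', 'tibetan mastiff', 'pug']})
 # get_dummies expands the column into one binary (0/1) column per class
 print(pd.get_dummies(df['breed'], dtype=int))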

There are other methods for multi-class classification problems as well, such as the One-vs-Rest and One-vs-One strategies described by Jason Brownlee. You can read about them here.
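scikit-learn ships ready-made wrappers for these two strategies. As a rough sketch (using a linear SVM purely for illustration, not the classifier used later in this article):

 from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
 from sklearn.svm import LinearSVC
 from sklearn.datasets import load_digits
 from sklearn.model_selection import train_test_split

 X, y = load_digits(return_X_y=True)
 X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
 # One-vs-Rest fits one binary classifier per class
 ovr = OneVsRestClassifier(LinearSVC(dual=False)).fit(X_train, y_train)
 # One-vs-One fits one binary classifier per pair of classes
 ovo = OneVsOneClassifier(LinearSVC(dual=False)).fit(X_train, y_train)
 print(ovr.score(X_test, y_test), ovo.score(X_test, y_test))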

Here we will use the K-Nearest Neighbours classifier and tune its hyperparameters, such as n_neighbors and metric. The elbow method will be used to choose the optimal value of K by plotting the error rate against different values of K.


Let’s understand how to solve a multi-class classification problem with a practical use case implemented in Python.

Code Implementation of Multi-Class Classification

Importing all dependencies:
 import numpy as np
 import pandas as pd
 import matplotlib.pyplot as plt
 from sklearn.model_selection import train_test_split
 from sklearn.neighbors import KNeighborsClassifier
 from sklearn.datasets import load_digits
 from sklearn.metrics import plot_confusion_matrix, classification_report

To avoid class imbalance and uncertainty in the dataset, this article uses the built-in digits dataset provided by sklearn, which has 64 pixel values of each digit image as input feature columns and a target digit ranging from 0 to 9.

Loading the dataset 
 load_data = load_digits()
 # Build a DataFrame with one column per pixel feature
 dataset = pd.DataFrame(load_data.data, columns=load_data.feature_names)
 # Add the target digit (0-9) as the label column
 dataset['digits'] = load_data.target
 dataset.head()
 
Output
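Since the article relies on the digits dataset being reasonably balanced, it is worth confirming the class distribution once the frame is built; a quick check on the dataset created above:

 # Count samples per digit; each of the 10 classes has roughly 180 samples
 print(dataset['digits'].value_counts().sort_index())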
Selecting input-output features and train test split:

As the dataset has only around 1,800 instances, we set the test size to 25% so that more data is available for training.

 x = dataset.drop(['digits'], axis=1)  # 64 pixel columns as input features
 y = dataset.digits                    # target digit (0-9)
 x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=1, test_size=0.25)
Elbow method to calculate K value graphically: 

To find the optimum value of K, I have used the elbow method; the plot below shows the error rate against K values from 1 to 99. The error rate is the fraction of test samples for which the predicted value is not equal to the actual value.

 error_rate = []
 for i in range(1, 100):
     knn = KNeighborsClassifier(n_neighbors=i)
     knn.fit(x_train, y_train)
     pred_i = knn.predict(x_test)
     error_rate.append(np.mean(pred_i != y_test))

 plt.figure(figsize=(13, 8))
 plt.plot(range(1, 100), error_rate, linestyle='dotted', marker='o', color='g')
 plt.xlabel('K value')
 plt.ylabel('Error Rate')
 plt.title('K value Vs Error Rate')
 plt.show()

From the above graph, we can see that from the initial point up to K values of 8 and 9 the error rate decreases; afterwards it tends to increase and then saturate over ranges of K, and this pattern continues across the rest of the graph.

Since the error given by the model at K equal to 8 or 9 is nearly zero, we can choose 8 or 9 as the K value for our final model.
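The same choice can also be made programmatically instead of reading it off the plot, using the error_rate list built in the loop above:

 # K values start at 1, so add 1 to the index of the smallest error
 optimal_k = int(np.argmin(error_rate)) + 1
 print('Optimal K:', optimal_k, 'with error rate', min(error_rate))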

Final model and performance metrics:
 # Train the final KNN model with the chosen K value
 model = KNeighborsClassifier(n_neighbors=8).fit(x_train, y_train)
 pred = model.predict(x_test)
 plot_confusion_matrix(model, x_test, y_test, cmap=plt.cm.Blues)
 print(classification_report(y_test, pred))
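
Note that plot_confusion_matrix has been deprecated and removed in newer scikit-learn releases; if the import above fails on your version, an equivalent plot can be produced with ConfusionMatrixDisplay, for example:

 from sklearn.metrics import ConfusionMatrixDisplay
 ConfusionMatrixDisplay.from_estimator(model, x_test, y_test, cmap=plt.cm.Blues)
 plt.show()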

Almost all the samples are predicted correctly. The classification report also shows that the model fits the dataset very well.
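The introduction also mentioned the metric hyperparameter, which the elbow loop above does not explore. One way to tune n_neighbors and metric together is a cross-validated grid search; a small sketch using the same training split:

 from sklearn.model_selection import GridSearchCV
 # Search over the number of neighbours and the distance metric jointly
 param_grid = {'n_neighbors': list(range(1, 20)),
               'metric': ['euclidean', 'manhattan', 'minkowski']}
 search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
 search.fit(x_train, y_train)
 print(search.best_params_, search.best_score_)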

The Google Colab notebook is present here for your reference. 

Endnotes

In this article, we discussed how to improve the performance of a classification algorithm by tuning its hyperparameters. In KNN, finding the K value by trial and error is a tedious task because even a slight change in K can degrade the model’s performance, as the graph above shows, so it is not a good practice. The elbow method is simple to implement, effective at finding the value of K where the model performs best, and helps limit problems due to overfitting.


