Generating Suitable ML Models Using LazyPredict Python Tool

While building machine learning models, we are rarely sure which algorithm will work well on a given dataset, so we end up trying many models and iterating until we reach acceptable accuracy. Have you ever thought about running all the basic algorithms at once to compare model performance?

LazyPredict is a module built for exactly this purpose. It runs all the basic machine learning algorithms on your dataset and reports how each performs. Along with the accuracy score, LazyPredict provides several evaluation metrics and the time taken by each model.

Lazypredict is an open-source Python package created by Shankar Rao Pandala. Development of and contributions to it are still ongoing.


Properties of LazyPredict:

  1. As of now, it supports only supervised learning algorithms (regression and classification).
  2. Compatible with Python version 3.6 and above.
  3. Can be run from the command-line interface (CLI).
  4. Fast, as the performance of all the basic models on the dataset is given at once.
  5. Has an inbuilt pipeline to scale and transform the data, handle missing values, and convert categorical data to numeric.
  6. Provides evaluation metrics for individual models.
  7. Shows the time each model took to build.
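The inbuilt preprocessing in point 5 is roughly equivalent to a standard scikit-learn pipeline. The sketch below is illustrative only, not LazyPredict's actual internals; the column names (`age`, `city`) are made up:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy frame with a missing value and a categorical column
df = pd.DataFrame({
    "age":  [25.0, np.nan, 40.0, 31.0],
    "city": ["BLR", "DEL", "BLR", "MUM"],
})

numeric = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),   # fill missing values
    ("scale",  StandardScaler()),                 # scale numeric features
])
categorical = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),
    ("onehot", OneHotEncoder(handle_unknown="ignore")),  # categorical -> numeric
])
preprocess = ColumnTransformer([
    ("num", numeric, ["age"]),
    ("cat", categorical, ["city"]),
])

X = preprocess.fit_transform(df)
print(X.shape)  # 1 scaled numeric column + 3 one-hot city columns -> (4, 4)
```

LazyPredict applies this kind of transformation automatically before fitting each candidate model, which is why you can hand it a raw data frame.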

In this article, I’ll be discussing how to implement LazyPredict for regression and classification models with just a few lines of code.


Installing LazyPredict:

Installation is very simple using the pip command:

pip install lazypredict

LazyPredict for Regression

I’ll be using the Mercedes dataset from Kaggle, which poses a regression problem: predicting the time a car will spend on testing.


The dataset presents custom features of the cars (X0 to X385) associated with a unique ID, and the target variable y is the time (in seconds) the car took to pass testing.

We import the LazyRegressor class from the lazypredict.Supervised module.

import pandas as pd
import lazypredict
from sklearn.model_selection import train_test_split
from lazypredict.Supervised import LazyRegressor

df = pd.read_csv('/content/drive/My Drive/datasets/mercedes.csv')
X = df.drop(['y'], axis=1)
Y = df['y']
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
reg = LazyRegressor(verbose=0, ignore_warnings=True,
                    custom_metric=None, predictions=True)
models, pred =, X_test, y_train, y_test)

The dataset is split into dependent and independent variables: the independent variables are stored in X and the dependent variable in Y. 80% of the data is used for training and 20% for testing.

The models variable contains all the models along with their metric values, and pred contains the predictions.

Parameters used in LazyRegressor():

  • verbose – by default 0.
  • ignore_warnings – by default True, to suppress warning messages for any kind of discrepancy while generating models.
  • custom_metric – by default None; can be set to a custom metric function if one is defined.
  • predictions – by default False; if set to True, it returns the predictions of each model.
  • random_state – by default 42.

Note that all of these parameters are optional, if not defined they will take the default values. 
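For custom_metric, LazyPredict expects a callable that takes the true and predicted values and returns a score. Below is a minimal sketch of such a function (the name `mape` and the sample numbers are our own, purely for illustration):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error: a metric of the
    form metric(y_true, y_pred) -> float."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Errors of 10%, 10%, and 0% average out to 20/3 percent
print(mape([100, 200, 400], [110, 180, 400]))
```

You would then pass it as `LazyRegressor(custom_metric=mape, ...)`, and the results table gains an extra column for the custom metric.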

models output:

Total of 39 models.

For regression models in LazyPredict, two evaluation metrics are available: RMSE (Root Mean Squared Error) and R-squared (R²), with models ranked from best to worst fit. The time taken to build each model is given in seconds. Predictions for regression are returned in a data frame.
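To see what those two columns measure, here is how RMSE and R² are computed with scikit-learn on a small made-up set of predictions (the numbers are illustrative, not from the Mercedes run):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_test = np.array([3.0, 5.0, 7.0, 9.0])   # toy ground truth
y_pred = np.array([2.5, 5.0, 7.5, 9.0])   # toy predictions from one model

rmse = np.sqrt(mean_squared_error(y_test, y_pred))  # penalises large errors
r2 = r2_score(y_test, y_pred)                       # 1.0 means a perfect fit

print(round(rmse, 4), round(r2, 4))
```

Lower RMSE and higher R² both indicate a better fit, which is the order LazyPredict sorts its table by.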

prediction output:

LazyPredict for Classification

For this demonstration, I’ve taken the wine recognition dataset from scikit-learn, a multiclass classification problem (class 0, class 1, class 2) with 13 features – Alcohol, Malic acid, Ash, Alkalinity of ash, Magnesium, Total phenols, Flavanoids, Nonflavanoid phenols, Proanthocyanins, Color intensity, Hue, OD280/OD315 of diluted wines, Proline. All of these features are numeric.

First 2 rows of dataset

from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from lazypredict.Supervised import LazyClassifier

data = load_wine()
X =
y =
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
classifier = LazyClassifier(verbose=0, ignore_warnings=True, custom_metric=None, predictions=True)
models, predictions =, X_test, y_train, y_test)

The dataset is loaded and then separated into two variables: all the features are stored in X and the target values in y. 80% of the data is used for training and 20% for testing.

The classifier parameters are the same as the regressor’s. Lastly, the models are fitted.

models output:

Total of 30 models

For classification, we need to import the LazyClassifier class from lazypredict.Supervised. The available evaluation metrics are accuracy score, balanced accuracy, F1 score, and ROC AUC.
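These four metrics can be reproduced with scikit-learn directly. A minimal sketch on a made-up binary example (the labels and scores below are invented for illustration; ROC AUC needs predicted scores rather than hard labels):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             f1_score, roc_auc_score)

y_true  = np.array([0, 0, 0, 1, 1, 1])      # toy ground-truth labels
y_pred  = np.array([0, 0, 1, 1, 1, 1])      # hard predictions from one model
y_score = np.array([0.1, 0.2, 0.6, 0.7, 0.8, 0.9])  # predicted probabilities

acc  = accuracy_score(y_true, y_pred)           # fraction of correct labels
bacc = balanced_accuracy_score(y_true, y_pred)  # mean per-class recall
f1   = f1_score(y_true, y_pred)                 # harmonic mean of precision/recall
auc  = roc_auc_score(y_true, y_score)           # ranking quality of the scores

print(acc, bacc, f1, auc)
```

Balanced accuracy is the one to watch when classes are imbalanced, since plain accuracy can look good while a minority class is ignored.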

Predictions of each model:


LazyPredict is very handy for selecting the most accurate model for a given dataset from a variety of candidates, along with their evaluation metrics, within seconds. The best model can then be tuned further through hyperparameter search. It is easy to implement and use, as it performs all the preprocessing for you.

The complete code of the above implementation is available at the AIM’s GitHub repository. Please visit this link to find the notebook of this code.


Jayita Bhattacharyya
Machine learning and data science enthusiast. Eager to learn new technology advances. A self-taught techie who loves to do cool stuff using technology for fun and worthwhile.
