
Hands-On Guide For Non-Linear Regression Models In R


It is a truth universally acknowledged that not all data can be represented by a linear model. By definition, non-linear regression is regression analysis in which observational data is modeled by a function that is a non-linear combination of the parameters and depends on one or more independent variables. Non-linear regression can produce more accurate predictions by capturing non-linear variations in the data and their dependencies.

In this tutorial, we will look at three of the most popular non-linear regression models and how to fit them in R. This is a hands-on tutorial for beginners with a good conceptual understanding of regression and non-linear regression models.

Pre-requisites:

  • Understanding of Non-Linear Regression Models
  • Knowledge of programming

Polynomial Regression

Polynomial regression is very similar to linear regression, but it additionally considers polynomial degrees of the independent variables. It is a form of regression analysis in which the relationship between the independent variable X and the dependent variable Y is modeled as an nth-degree polynomial in X. The model can be extended to fit multiple independent factors.

Consider, for example, a simple dataset consisting of only two features, Experience and Salary, where Salary is the dependent factor and Experience is the independent factor. Unlike simple linear regression, which regresses Salary against the given Experience values alone, polynomial regression considers powers of Experience up to a specified degree. That is, Salary is predicted against Experience, Experience^2, …, Experience^n.
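To make this concrete, here is a minimal sketch of what such a dataset looks like once the polynomial terms are added (salary_data and its values are hypothetical):

# hypothetical experience/salary data
salary_data = data.frame(Experience = c(1, 2, 3, 4, 5),
                         Salary = c(30, 35, 45, 62, 85))
# polynomial terms up to the 3rd degree
salary_data$Experience2 = salary_data$Experience^2
salary_data$Experience3 = salary_data$Experience^3
head(salary_data)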

Code

Polynomial regression is handled by the built-in function ‘lm’ in R. After loading the dataset, follow the instructions below.

Creating the Polynomial Regressor Model and fitting it with Training Set

# add polynomial terms of X up to the 4th degree as new columns
dataset$X2 = dataset$X^2
dataset$X3 = dataset$X^3
dataset$X4 = dataset$X^4
# fit a linear model; '.' uses every column except Y as a predictor
poly_regressor = lm(formula = Y ~ ., data = dataset)


The first 3 lines calculate the higher-degree terms of the independent variable X for each row of observations and add them as features to the original dataset. Here we have calculated up to the 4th degree, denoted as X4.

  • formula: Used to differentiate the independent variable(s) from the dependent variable. In case of multiple independent variables, the variables are appended using the ‘+’ symbol, e.g. Y ~ X1 + X2 + X3 + …
  • X: independent variable or factor. The column label is specified.
  • Y: dependent variable. The column label is specified.
  • data: the data the model trains on, i.e. the training set.
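Before predicting, it is worth inspecting the fitted model. A minimal sketch using base R (no assumptions beyond the poly_regressor created above):

summary(poly_regressor)  # coefficients, significance and R-squared of the fit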

Predicting the Y value for a new X

predict(poly_regressor, newdata = data.frame(X = value, X2 = value^2, X3 = value^3, X4 = value^4))

This line predicts the value of the dependent variable for a new value of the independent variable; a concrete usage sketch follows the list below.

  • poly_regressor: the regressor model that was created and trained above.
  • newdata: the new observation or set of observations to predict Y for, passed as a data frame. The higher-degree terms (X2, X3, X4) must be supplied as well, since the model was trained on them.
  • value: replace this with the number you want to predict Y for.
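For example, with a hypothetical new value X = 6.5 (a placeholder; substitute your own number):

predict(poly_regressor, newdata = data.frame(X = 6.5, X2 = 6.5^2, X3 = 6.5^3, X4 = 6.5^4))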


Visualizing the predictions

install.packages('ggplot2') #install once
library(ggplot2)
# a denser grid of X values (steps of 0.1) gives a smoother curve
X_grid = seq(min(dataset$X), max(dataset$X), 0.1)
ggplot() +
  geom_point(aes(x = dataset$X, y = dataset$Y), colour = 'black') +
  geom_line(aes(x = X_grid,
                y = predict(poly_regressor,
                            newdata = data.frame(X = X_grid, X2 = X_grid^2,
                                                 X3 = X_grid^3, X4 = X_grid^4))),
            colour = 'red') +
  ggtitle('Polynomial Regression') +
  xlab('X') +
  ylab('Y')

This block of code plots the data points and the fitted curve using the ggplot2 library. To obtain a smooth curve, predictions are computed on a dense grid of X values (X_grid, in steps of 0.1) rather than only at the observed points.

  • geom_point(): scatter-plots all data points on a two-dimensional graph
  • geom_line(): draws the regression curve on the graph
  • ggtitle(): assigns the title of the graph
  • xlab(): labels the X-axis
  • ylab(): labels the Y-axis
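As an aside, the same model can be fitted without creating the degree columns by hand, using base R’s poly() helper (a sketch, assuming the same dataset; raw = TRUE reproduces the plain powers used above):

# equivalent model with automatically generated polynomial terms
poly_regressor2 = lm(Y ~ poly(X, 4, raw = TRUE), data = dataset)
predict(poly_regressor2, newdata = data.frame(X = 6.5))  # 6.5 is a placeholder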

Decision Tree Regression

Decision Tree Regression works by splitting a dimension into different sections, each containing a minimum number of data points, and predicts the result for a new data point by calculating the mean value of all the data points in the section it belongs to. That is, it breaks the dataset down into smaller and smaller subsets while an associated decision tree is developed incrementally. Decision trees can build both regression and classification models in the form of a tree structure.

Code

The Decision Tree Regression is handled by the rpart library.

Installing and Importing Libraries

install.packages('rpart') #install once
library(rpart) # importing the library

Creating the Decision Tree Regressor and providing the Training Set

# fit a regression tree; minsplit = 1 allows splits down to single observations
decisionTree_regressor = rpart(formula = Y ~ ., data = dataset, control = rpart.control(minsplit = 1))

The expression ‘Y ~ .’ takes every column except Y in the training set as an independent variable, so make sure the dataset contains only the intended feature columns.

  • formula: Used to differentiate the independent variable(s) from the dependent variable. In case of multiple independent variables, the variables are appended using the ‘+’ symbol, e.g. Y ~ X1 + X2 + X3 + …
  • control: parameters that control the formation of the decision tree.
  • minsplit: the minimum number of observations that must exist in a node for a split to be attempted (see the sketch after this list).
  • X: independent variable or factor. The column label is specified.
  • Y: dependent variable. The column label is specified.
  • data: the data the model trains on, i.e. the training set.
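For example, raising minsplit yields a coarser tree (a sketch with a hypothetical value of 10):

# require at least 10 observations in a node before attempting a split
coarse_tree = rpart(formula = Y ~ ., data = dataset, control = rpart.control(minsplit = 10))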

Predicting the Y value for a new X

y_pred = predict(decisionTree_regressor, newdata = data.frame(X = value))

This line predicts the Y value for a given X value. Replace ‘value’ with a real number. A quick sanity check of the fitted tree is sketched below.
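As a quick sanity check, one can also predict over the training data and compute the mean squared error (a sketch, assuming the dataset used above):

y_fit = predict(decisionTree_regressor, newdata = dataset)
mean((dataset$Y - y_fit)^2)  # should be very low, since minsplit = 1 overfits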

Visualizing the test set results

library(ggplot2)
# a dense grid (steps of 0.01) makes the step-like predictions visible
x_grid = seq(min(dataset$X), max(dataset$X), 0.01)
ggplot() +
  geom_point(aes(x = dataset$X, y = dataset$Y), colour = 'red') +
  geom_line(aes(x = x_grid,
                y = predict(decisionTree_regressor, data.frame(X = x_grid))),
            colour = 'black') +
  ggtitle('Y vs X (Decision Tree Regression)') +
  xlab('X') +
  ylab('Y')

This code plots the data points and the regressor’s predictions on a two-dimensional graph. Predictions are computed on a dense grid of X values (x_grid, in steps of 0.01), which makes the step-like shape of the tree’s predictions visible. The plotting functions are the same as described in the Polynomial Regression section.


plot(decisionTree_regressor)  # draw the tree structure
text(decisionTree_regressor)  # add the split labels to the plot

These lines display the tree structure that was generated.
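For a more readable rendering of the tree, the rpart.plot package can be used instead (a sketch; this is a separate package that must be installed first):

install.packages('rpart.plot') #install once
library(rpart.plot)
rpart.plot(decisionTree_regressor)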

Random Forest Regression

Random Forest Regression is one of the most popular and effective predictive algorithms used in machine learning. It is a form of ensemble learning: the same base algorithm is trained multiple times, and the final prediction is the average of all the individual predictions. Random Forest Regression is a combination of multiple Decision Tree Regressions, hence the name Forest.

Code

The randomForest library is used for handling Random Forest Regression in R.

Installing and Importing the Library

install.packages('randomForest') #install once
library(randomForest) # importing the library

Creating the Random Forest Regressor and fitting it with Training Set

# x must be a data frame (or matrix) of predictors, so the column is selected
# with ['X'] rather than $X to preserve the data-frame structure
random_forest_regressor = randomForest(x = dataset['X'], y = dataset$Y, ntree = 300)

This line creates a Random Forest Regressor and provides the data to train.

  • x: the independent variable(s), passed as a data frame
  • y: the dependent variable
  • ntree: the number of decision trees to grow in the forest (see the averaging sketch after this list)
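To see the averaging described earlier at work, predict can return each tree’s individual prediction alongside the aggregate (a sketch; 6.5 is a placeholder value):

pred = predict(random_forest_regressor, data.frame(X = 6.5), predict.all = TRUE)
pred$aggregate         # the forest's prediction: the average over all trees
mean(pred$individual)  # identical: the mean of the 300 per-tree predictions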

Predicting the value for a new X

y_pred = predict(random_forest_regressor, data.frame(X = value))

Note:

Replace ‘value’ with a real number you want to predict Y for.
