# Guide to Different Padding Methods for CNN Models


Convolutional neural networks (CNNs) are used every day to tackle problems in image processing, predictive modelling, and classification, and their most popular application is analyzing image data. Mathematically, every image in a dataset is a matrix of pixel values. When we pass an image through a simple CNN, the output is reduced in size, which amounts to a loss of information, and this can make it difficult to produce results that match our requirements. When we don't want the shape of our outputs to shrink, we can preserve it by adding extra rows and columns around the data, and that addition is done by padding.

In this article, we will discuss padding with its importance and how to use it with CNN models. We will also discuss different methods of padding with how they can be implemented. Below are the important points that we are going to cover in this article.

#### Table of Contents

1. Problem with Simple Convolutional Layers
2. What is Padding?
3. Types of Padding
4. How to Use Padding in a CNN Model

Let’s begin with understanding the problem faced with simple convolutional layers.

## Problem with Simple Convolution Layers

For a grayscale image of size (n x n) and a filter/kernel of size (f x f), a simple convolutional layer gives an output of size (n – f + 1) x (n – f + 1). For example, in a convolution operation with an (8 x 8) image and a (3 x 3) filter, the output image size will be (6 x 6). This happens every time when processing images: the output of each layer shrinks in comparison to its input. In addition, the filter does not pass over the corner pixels as often as it passes over the central ones as it moves across the image. For example,

The above image shows the movement of a (3 x 3) filter over a (6 x 6) image. We can clearly see that corner pixel A falls under the filter in only one position, while pixel B falls under it in three positions and pixel C in nine. The model will therefore learn from pixel C very well but largely miss the information in pixel A.

This causes a loss of the information available in the corners, and the shrinking output passes less information on to the next layers. Both problems can be reduced by padding layers.
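The shrinkage described above can be checked numerically with the (n – f + 1) rule; a minimal sketch in plain Python (the helper name `conv_output_size` is our own, not from any library):

```python
def conv_output_size(n, f):
    """Output side length of an unpadded convolution:
    an (n x n) input convolved with an (f x f) filter."""
    return n - f + 1

# The (8 x 8) image with a (3 x 3) filter from the text:
print(conv_output_size(8, 3))  # 6, i.e. a (6 x 6) output

# Stacking a second (3 x 3) layer shrinks the map further:
print(conv_output_size(conv_output_size(8, 3), 3))  # 4
```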

As we just discussed, the convolutional layers reduce the size of the output. So in cases where we want to increase the size of the output and save the information presented in the corners we can use padding layers where padding helps by adding extra rows and columns on the outer dimension of the images. So the size of input data will remain similar to the output data.

Padding basically extends the area of the image that a convolutional neural network processes. The kernel/filter that moves across the image scans each pixel and converts the image into a smaller one. Padding added to the outer frame of the image gives the filter more space to cover, including the edges, which allows for a more accurate analysis of the image.
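What "extending the area" means can be sketched in plain Python; `zero_pad` below is a hypothetical helper written for illustration, not a library function:

```python
def zero_pad(image, p=1):
    """Surround a 2-D matrix (list of lists) with a border of p zeros."""
    n = len(image[0])
    zero_row = [0] * (n + 2 * p)
    body = [[0] * p + row + [0] * p for row in image]
    return [zero_row[:] for _ in range(p)] + body + [zero_row[:] for _ in range(p)]

img = [[1, 2],
       [3, 4]]
print(zero_pad(img))
# [[0, 0, 0, 0], [0, 1, 2, 0], [0, 3, 4, 0], [0, 0, 0, 0]]
```

A (2 x 2) matrix becomes (4 x 4), so a subsequent (3 x 3) filter can sit on every original pixel, including the corners.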


In the above image, a padding layer (the grey rows and columns) has been added around the image; this is how padding prevents the image matrix from shrinking.

There are three types of padding: same, valid, and causal.

Before introducing the different types of padding, let's build a baseline model so that we can see a proper difference between a model without padding and a model with it.

Here I am using Keras to build a model which can be fitted to data when required. A simple CNN model starts from a Sequential instance, after which we can add some convolutional layers to the model.

```python
from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
```

Let's look at the summary of the model (`model.summary()`) so that we can see how the convolutional layers reduce the size of the input.

In the summary of the model we have created, we can clearly see that with every convolutional layer the output size decreases. By padding, we can resolve this problem.
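The per-layer shrinkage the summary shows can be reproduced without running Keras, using the (n – f + 1) rule with the 28 x 28 input and 3 x 3 kernels from the code above:

```python
n = 28             # side length from input_shape=(28, 28, 1)
sizes = []
for f in (3, 3):   # the two Conv2D layers, both with 3 x 3 kernels
    n = n - f + 1  # the (n - f + 1) rule for an unpadded convolution
    sizes.append(n)

print(sizes)  # [26, 24] -- each layer trims f - 1 pixels per dimension
```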

## Same Padding

In this type of padding, the padding layers append zero values in the outer frame of the image or data so that the filter can cover the edges of the matrix and make inferences from them as well.

Below is an example of how we can create a model with same padding.

```python
from keras.models import Sequential
from keras.layers import Conv2D

models = Sequential()
models.add(Conv2D(128, kernel_size=(3, 3), activation='relu', padding='same', input_shape=(28, 28, 1)))
```

Let’s check the summary.

Here we can compare the summary of the model with same padding against the model without padding. We can clearly see that the output size stays the same: for each layer, the feature map dimensions are unchanged, and the output shape remains 28 x 28 every time. This is how we can overcome the problem of the reduction in output size.
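With `padding='same'` and the default stride of 1, the output size equals the input size regardless of kernel size; a quick check in plain Python (`same_output_size` is our own helper name, not a Keras function):

```python
import math

def same_output_size(n, stride=1):
    # 'same' padding: output = ceil(n / stride), independent of kernel size
    return math.ceil(n / stride)

print(same_output_size(28))     # 28 -- the feature map stays 28 x 28
print(same_output_size(28, 2))  # 14 -- with stride 2 it halves, but no cells are lost
```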

## Valid Padding

This type of padding can be considered as no padding at all. Why it is still called padding will become clear after the example, so let's just look at the model first.

```python
from keras.models import Sequential
from keras.layers import Conv2D

modelv = Sequential()
modelv.add(Conv2D(128, kernel_size=(3, 3), activation='relu', padding='valid', input_shape=(28, 28, 1)))
```

Here we have created a model with a two-dimensional convolutional layer using valid padding.

Let’s check the summary of the model.

Here we can see that the summary is identical to the summary of the model without padding. So a question naturally arises in the reader's mind: why is this option required at all?

With valid padding we really don't apply any padding; instead, we only place the filter over positions where it is fully covered by valid pixels of the input, so nothing outside the image is ever assumed. The output therefore contains only the positions the filter could completely cover.

Note: when using valid padding, apply it to the max-pooling layers in the same way (`padding='valid'`).

In same padding, the model uses every point/pixel value while learning; in valid padding, we treat every pixel of the image as valid so nothing outside it is invented. Valid padding does not preserve the size of the input; it only controls which pixel positions count as valid.

In VALID (i.e. no padding) mode, TensorFlow will drop the rightmost and/or bottom cells if your filter and stride don't fully cover the input image, whereas same padding spreads the padding around the frame of the image.
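This cell-dropping behaviour can be seen by comparing the two output-size formulas TensorFlow uses; a sketch in plain Python (the helper names are ours):

```python
import math

def valid_out(n, f, s):
    # VALID: floor((n - f) / s) + 1 -- trailing cells the filter
    # cannot fully cover are dropped
    return (n - f) // s + 1

def same_out(n, s):
    # SAME: ceil(n / s) -- padding is spread around the frame so
    # every input cell is covered
    return math.ceil(n / s)

# 28-wide input, 3-wide filter, stride 2:
print(valid_out(28, 3, 2))  # 13 -- bottom/right cells are dropped
print(same_out(28, 2))      # 14
```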

## Causal Padding

This is a special type of padding that works with one-dimensional convolutional layers, so it is used mainly in time series analysis. Since a time series is sequential data, causal padding adds zeros at the start of the sequence, which also helps in predicting the values of the early time steps.

We use this padding with convolutional layers where every layer builds on what the previous layer learnt, and what it learns is in turn passed to the next layer. In a time series, if a forecasted value can be used in forecasting the next time step, the model can become more helpful and accurate.

A model architecture consisting of causal padding with a kernel size of four can be represented as:


And a whole time series model architecture with a fully connected layer and causal padding can be like:

Let’s see how we can implement this.

```python
from keras.models import Sequential
from keras.layers import Conv1D

modelc = Sequential()
# input_shape (timesteps, features) assumed here for illustration
modelc.add(Conv1D(128, kernel_size=4, activation='relu', padding='causal', input_shape=(10, 1)))
```

Here we have made a model with a one-dimensional convolutional layer and causal padding.

Let’s check the summary.

Here we can see that the size of the output does not change, which means causal padding also helps by adding zero values, but for the one-dimensional convolutional layer.
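This length-preserving behaviour can be reproduced by hand: causal padding prepends `kernel_size - 1` zeros so each output depends only on the current and past inputs. A sketch in plain Python (`causal_conv1d` is a hypothetical helper for illustration, not a Keras function):

```python
def causal_conv1d(x, w):
    """1-D convolution with causal padding: prepend len(w) - 1 zeros
    so output[t] uses only x[0..t] and the output keeps len(x)."""
    k = len(w)
    padded = [0.0] * (k - 1) + list(x)
    return [sum(padded[t + i] * w[i] for i in range(k))
            for t in range(len(x))]

series = [1.0, 2.0, 3.0, 4.0, 5.0]
kernel = [0.5, 0.25, 0.25, 0.0]   # kernel_size=4, as in the model above
out = causal_conv1d(series, kernel)
print(len(out))  # 5 -- same length as the input, no future values used
```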

## Final Words

In this article, we have seen how to deal with the shrinking-output problem of the simple convolutional layer by using different kinds of padding, and how these padding methods differ from each other: same and valid padding are used with two-dimensional convolutional layers, while causal padding is used with one-dimensional convolutional layers.



Yugesh is a graduate in automobile engineering and worked as a data analyst intern. He completed several Data Science projects. He has a strong interest in Deep Learning and writing blogs on data science and machine learning.
