
A Beginner’s Guide To Neural Network Modules In Pytorch


PyTorch is a deep learning library created by Facebook AI in 2017. It is used by many prominent companies such as Apple, Nvidia and AMD. You can read more about the companies that are using it from here.

It is often compared to TensorFlow, another prominent deep learning library, which was released by Google in 2015.

You can read about how PyTorch is competing with TensorFlow from here.

There are a lot of functions, and explaining each of them is not always possible, so we will write a brief code snippet for each and then give a simple explanation. If you want to read more about a topic, click on the link shared in that section.

Installation

Installation command is different for different OS, you can check the best one for you from here.
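For instance, a basic CPU-only build can usually be installed with pip (this is just one common option; the exact command for your OS and CUDA version is listed on the official site):

pip install torch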

#dependency 

import torch

We will look at tensors first because they are really important.

Let us understand what a tensor is.

A tensor, in simple words, is a multidimensional array that generalises vectors and matrices to an arbitrary number of dimensions. Now let us see what we can do with it.

# a few lucid examples of tensors
a = torch.tensor(3)                  # a scalar (0-dimensional tensor)
a = torch.tensor([1, 3])             # a vector (1-dimensional tensor)
a = torch.tensor([[1, 2], [3, 4]])   # a matrix (2-dimensional tensor)
# a slightly more complex, 3-dimensional tensor
tensor = torch.Tensor(
    [
     [[1, 2], [3, 4]], 
     [[5, 6], [7, 8]], 
     [[9, 0], [1, 2]]
    ]
)
tensor.shape # to find the shape of the tensor

Output
torch.Size([3, 2, 2])

Indexing tensors
tensor[1]

Output
tensor([[5., 6.],
        [7., 8.]])

There are a lot of other functions, for which you can refer to the official documentation linked at the end of this article.

Initialising like tensors

"Like" tensors are tensors that have the same shape as a given tensor.

torch.ones_like(tensor)
tensor([[[1., 1.],
         [1., 1.]],
        [[1., 1.],
         [1., 1.]],
        [[1., 1.],
         [1., 1.]]])

Here the shape is the same as that of our previous tensor, and every element is 1.

torch.zeros_like(tensor)
tensor([[[0., 0.],
         [0., 0.]],
        [[0., 0.],
         [0., 0.]],
        [[0., 0.],
         [0., 0.]]])

All the elements of this tensor would be zero.

# here we create a tensor of the same shape whose every element is drawn from a standard normal distribution
torch.randn_like(tensor)
tensor([[[-0.3675,  0.2242],
         [-0.3378, -1.0944]],
        [[ 1.5371,  0.7701],
         [-0.1490, -0.0928]],
        [[ 0.3270,  0.4642],
         [ 0.1494,  0.1283]]])

Let us take a look at some basic operations on tensors.

(tensor - 5) * 2
Output
tensor([[[ -8.,  -6.],
         [ -4.,  -2.]],
        [[  0.,   2.],
         [  4.,   6.]],
        [[  8., -10.],
         [ -8.,  -6.]]])

To read more about tensors, you can refer here.

You can have a look at Pytorch’s official documentation from here.

We will see a few deep learning methods of PyTorch.

PyTorch’s neural network module

#dependency
import torch.nn as nn
nn.Linear

It creates a linear (fully connected) layer. We pass the input and output dimensions as parameters.

Here it takes an input of shape n×10 and returns an output of shape n×2.

linear = nn.Linear(10, 2)
example_input = torch.randn(3, 10)
example_output = linear(example_input)
example_output
Output:
tensor([[ 0.2102,  0.5055],
        [-0.5417,  0.8288],
        [ 0.1755,  0.3779]], grad_fn=<AddmmBackward>)

nn.ReLU

It applies the ReLU activation function to the output of the linear layer.

relu = nn.ReLU()
relu_output = relu(example_output)
relu_output
Output:
tensor([[0.2900, 0.0000],
        [0.4298, 0.4173],
        [0.4861, 0.0000]], grad_fn=<ReluBackward0>)
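ReLU simply replaces every negative value with zero. As a quick sanity check (not part of the original article), the same result can be obtained with torch.clamp:

torch.clamp(example_output, min=0)   # identical to relu(example_output)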

nn.BatchNorm1d

It is a normalisation technique used to maintain a consistent mean and standard deviation across different batches of input. The argument (here 2) is the number of input features.

batchnorm = nn.BatchNorm1d(2)
batchnorm_output = batchnorm(relu_output)
batchnorm_output
Output
tensor([[-1.3570, -0.7070],
        [ 0.3368,  1.4140],
        [ 1.0202, -0.7070]], grad_fn=<NativeBatchNormBackward>)
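As a quick check (a minimal sketch, not part of the original output), the normalised output should have roughly zero mean and unit standard deviation in each of the two feature columns:

batchnorm_output.mean(dim=0)                  # close to 0 for each feature
batchnorm_output.std(dim=0, unbiased=False)   # close to 1 for each feature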

You can read about BatchNorm1d and BatchNorm2d in the official documentation.

nn.Sequential

It creates a sequence of operations that are executed one after the other, in one go.

mlp is the name of the variable; it stands for multilayer perceptron.

mlp_layer = nn.Sequential(
    nn.Linear(5, 2),
    nn.BatchNorm1d(2),
    nn.ReLU()
)
test_example = torch.randn(5,5) + 1
print("input: ")
print(test_example)
print("output: ")
print(mlp_layer(test_example))
Output
input: 
tensor([[ 1.7690,  0.2864,  0.7925,  2.2849,  1.5226],
        [ 0.1877,  0.1367, -0.2833,  2.0905,  0.0454],
        [ 0.7825,  2.2969,  1.2144,  0.2526,  2.5709],
        [-0.4878,  1.9587,  1.6849,  0.5284,  1.9027],
        [ 0.5384,  1.1787,  0.4961, -1.6326,  1.4192]])
output: 
tensor([[0.0000, 1.1865],
        [1.5208, 0.0000],
        [0.0000, 1.1601],
        [0.0000, 0.0000],
        [0.7246, 0.0000]], grad_fn=<ReluBackward0>)

To read about why nn.Sequential is important and when it is needed, see here.

Optimisers

import torch.optim as optim
adam_opt = optim.Adam(mlp_layer.parameters(), lr=1e-1)

Here lr stands for the learning rate, and 1e-1 means 0.1.

# now let us look at a single training step
train_example = torch.randn(100,5) + 1
adam_opt.zero_grad()
# We'll use a simple loss function of mean distance from 1
# torch.abs takes the absolute value of a tensor
cur_loss = torch.abs(1 - mlp_layer(train_example)).mean()
cur_loss.backward()
adam_opt.step()
print(cur_loss)
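The snippet above performs a single optimisation step. A minimal sketch of how it could be wrapped into a full training loop (the number of steps is an arbitrary choice for illustration):

for step in range(10):                                         # 10 steps, chosen arbitrarily
    adam_opt.zero_grad()                                       # clear gradients from the previous step
    cur_loss = torch.abs(1 - mlp_layer(train_example)).mean()  # same toy loss as above
    cur_loss.backward()                                        # compute gradients
    adam_opt.step()                                            # update the parameters
    print(step, cur_loss.item())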

A little bit of theory:

requires_grad_()

Calling requires_grad_() on a tensor tells PyTorch to track gradients for that tensor, even if it would not normally do so.
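A minimal sketch of what this looks like in practice (the tensor here is only for illustration):

x = torch.randn(3)      # requires_grad is False by default for plain tensors
x.requires_grad_()      # in-place switch: PyTorch will now track gradients for x
y = (x ** 2).sum()
y.backward()
print(x.grad)           # gradient of y with respect to x, i.e. 2 * x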

with torch.no_grad():

PyTorch will usually build a computation graph and calculate gradients as it performs operations on tensors. This can take up unnecessary computation and memory, especially if you are only performing evaluation. You can wrap a piece of code in torch.no_grad() to prevent gradients from being calculated inside that block.
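For example, reusing mlp_layer and test_example from above, an evaluation pass could be wrapped like this (a minimal sketch):

mlp_layer.eval()                        # put BatchNorm into evaluation mode
with torch.no_grad():                   # no computation graph is built inside this block
    eval_output = mlp_layer(test_example)
print(eval_output.requires_grad)        # False: gradients are not tracked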

detach():

Sometimes, you want to calculate and use a tensor’s value without calculating its gradients. For example, if you have two models, A and B, and you want to directly optimise the parameters of A with respect to the output of B, without calculating the gradients through B, then you could feed the detached output of B to A. There are many reasons you might want to do this, including efficiency or cyclical dependencies (i.e. A depends on B depends on A).
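A minimal sketch of the A/B scenario described above (model_a and model_b are hypothetical modules used only for illustration):

model_a = nn.Linear(2, 1)             # hypothetical model A
model_b = nn.Linear(5, 2)             # hypothetical model B
x = torch.randn(8, 5)
b_out = model_b(x)
a_out = model_a(b_out.detach())       # detach(): same values, but cut off from B's graph
a_out.mean().backward()               # gradients flow into model_a only
print(model_b.weight.grad)            # None, because the graph was cut at detach()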

We will now build our own module by subclassing nn.Module.

class ExampleModule(nn.Module):
    def __init__(self, input_dims, output_dims):
        super(ExampleModule, self).__init__()
        self.linear = nn.Linear(input_dims, output_dims)
        self.exponent = nn.Parameter(torch.tensor(1.))
    def forward(self, x):
        x = self.linear(x)
        # This is the notation for element-wise exponentiation, 
        # which matches python in general
        x = x ** self.exponent 
        return x
example_model = ExampleModule(10, 2)
list(example_model.parameters())
Output 
[Parameter containing:
 tensor(1., requires_grad=True),
 Parameter containing:
 tensor([[ 0.2789,  0.2618, -0.0678,  0.2766,  0.1436,  0.0917, -0.1669, -0.1887,
           0.0913, -0.1998],
         [-0.1757,  0.0361,  0.1140,  0.2152, -0.1200,  0.1712,  0.0944, -0.0447,
           0.1548,  0.2383]], requires_grad=True),
 Parameter containing:
 tensor([ 0.1881, -0.0834], requires_grad=True)]


Finally, we run a forward pass through the module we created:
input = torch.randn(2, 10)
example_model(input)
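Since the weights are randomly initialised, the exact values differ on every run; a quick check of the output shape (not part of the original output) looks like this:

output = example_model(input)
print(output.shape)    # torch.Size([2, 2]): 2 examples, 2 output dimensions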

Conclusion

The aim of this article was to give a brief introduction to PyTorch. We discussed its origin and important building blocks such as tensors and the nn module. There is a lot more to it, and it simply isn't possible to cover everything in one article, which is why this one is kept concise and gives you a rough idea of the concepts. If you want to read more, you can go through the official documentation thoroughly from here.

Hope you liked the article.



Bhavishya Pandit

Understanding and building fathomable approaches to problem statements is what I like the most. I love talking about conversations whose main plot is machine learning, computer vision, deep learning, data analysis and visualization. Apart from them, my interest also lies in listening to business podcasts, use cases and reading self help books.
