Getting started with tensors from scratch in PyTorch

Various deep learning frameworks such as PyTorch do their computation on numbers in the form of tensors. Tensors are one of the fundamental data types in deep learning. In this article, we will discuss tensors in detail: how to create them and the various operations that can be performed on them. For the demonstrations, we will create tensors from scratch in PyTorch and perform a few basic operations on them. The major points to be discussed in this article are listed below.

Table of contents

  1. What is a Tensor?
  2. Tensor creation with PyTorch
    1. Using random data
    2. Using NumPy
    3. Using Pandas
  3. Operations that can be performed on tensors

Let’s first discuss what a tensor actually is.

What is a Tensor?

Tensors are algebraic objects that describe multilinear relationships between sets of algebraic objects associated with a vector space. Vectors and scalars, as well as other tensors, are examples of objects that tensors can map between.

Tensors come in a variety of forms, including scalars and vectors (the most basic tensors), dual vectors, multilinear maps across vector spaces, and even some operations like the dot product. Tensors are defined independently of any basis, yet they are frequently referred to by their components in a basis associated with a certain coordinate system.

Simply put, a tensor is a container that can hold data in N dimensions. Tensors are generalizations of matrices to N-dimensional space and are frequently, and incorrectly, used interchangeably with matrices (a matrix is precisely a 2-dimensional tensor).
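
As a quick illustration of this hierarchy, the rank of a tensor is simply its number of dimensions. Here is a minimal sketch (using only the standard torch API) showing a scalar, a vector, a matrix, and a 3-dimensional tensor:

# rank-0 (scalar), rank-1 (vector), rank-2 (matrix) and rank-3 tensors
import torch

scalar = torch.tensor(7)
vector = torch.tensor([1, 2, 3])
matrix = torch.tensor([[1, 2], [3, 4]])
cube = torch.rand(2, 3, 4)

for t in (scalar, vector, matrix, cube):
    # print the number of dimensions and the shape of each tensor
    print(t.ndim, tuple(t.shape))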

A tensor therefore encompasses scalars (rank 0), vectors (rank 1), and matrices (rank 2) as special cases, along with higher-dimensional arrays. In the following sections, we'll see how to create these tensors and how various operations can be performed on them, all using Python and PyTorch.

Tensor creation with PyTorch

In this section, we'll see how tensors can be created. Since in data science we usually work with NumPy and pandas, we'll see how tensors can be created from NumPy arrays and pandas Series, as well as from randomly generated data. Let's start with random data.

Importing the dependencies

import torch
import numpy as np
import pandas as pd

Using random data

As we are using PyTorch, the method torch.rand(m, n) creates an m x n tensor filled with random values drawn uniformly from the interval [0, 1). The code below creates such a tensor and also prints its type and dtype.

# using random numbers
ex = torch.rand(4,4)
print('Type: {} and dtype: {}'.format(ex.type(),ex.dtype))
print('\n',ex)

Output:

Type: torch.FloatTensor and dtype: torch.float32

 tensor([[0.6979, 0.9999, 0.7336, 0.0595],

        [0.0043, 0.8152, 0.5872, 0.9255],

        [0.3313, 0.9351, 0.2069, 0.3246],

        [0.5283, 0.6782, 0.4922, 0.7608]])
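
Because these values come from a random generator, the exact numbers above will differ on every run. If reproducible values are needed, the generator can be seeded first; a minimal sketch:

# fixing the random seed so torch.rand produces the same values on each run
torch.manual_seed(0)
reproducible = torch.rand(4, 4)
print(reproducible)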

Since arrays and tensors are ultimately built from lists of numbers, we can also create a tensor simply by passing a list (or nested lists) to torch.tensor(), as shown below.

# using list
t = torch.tensor([[1, 2, 3], [4, 5, 6]])
print('Shape of above tensor',t.shape)
print('\n',t)

Output:

Shape of above tensor torch.Size([2, 3])

 tensor([[1, 2, 3],

        [4, 5, 6]])
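
Note that a tensor built from Python integers defaults to an integer dtype; if a floating-point tensor is needed, the dtype can be passed explicitly. A small sketch:

# forcing a floating-point dtype when creating a tensor from a list
t_float = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32)
print(t_float.dtype)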

Using NumPy

As stated above, we can also convert NumPy arrays to tensors with PyTorch. This conversion is performed with torch.from_numpy(). Let's apply it to a NumPy array. Similarly, we use the .numpy() method to convert the tensor back to a NumPy array.

# using numpy
numpy_arr = np.array([5.0, 6.0, 7.0, 8.0])
numpy_to_tensor = torch.from_numpy(numpy_arr)
print('Type: {} and dtype: {}'.format(numpy_to_tensor.type(),numpy_to_tensor.dtype))
print('\n',numpy_to_tensor)
# converting back to numpy
print('\nBack to numpy:',numpy_to_tensor.numpy())

Output:

Type: torch.DoubleTensor and dtype: torch.float64

 tensor([5., 6., 7., 8.], dtype=torch.float64)

Back to numpy: [5. 6. 7. 8.]
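
One detail worth knowing: torch.from_numpy() shares memory with the source array, so modifying one is reflected in the other. A short sketch illustrating this:

# from_numpy shares memory with the original NumPy array
shared = np.array([1.0, 2.0, 3.0])
shared_t = torch.from_numpy(shared)
shared[0] = 99.0
# the change made to the NumPy array is visible in the tensor
print(shared_t)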

Just as we create arrays of zeros and ones using NumPy, we can do the same in torch; the two APIs are nearly identical, as shown below.

# similar to the np.zeros and np.ones
print('2 x 3 matrix of zeros:\n',torch.zeros(2,3, dtype=torch.int32))
print('\n3 x 2 matrix of ones:\n',torch.ones(3,2, dtype=torch.float32))

Output:

2 x 3 matrix of zeros:

 tensor([[0, 0, 0],

        [0, 0, 0]], dtype=torch.int32)

3 x 2 matrix of ones:

 tensor([[1., 1.],

        [1., 1.],

        [1., 1.]])
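
PyTorch also mirrors NumPy's *_like helpers, which create tensors of zeros or ones with the same shape and dtype as an existing tensor. A minimal sketch:

# zeros/ones matching the shape and dtype of an existing tensor
template = torch.rand(2, 3)
print(torch.zeros_like(template))
print(torch.ones_like(template))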

Using Pandas

A pandas Series essentially represents a column of a DataFrame. Its .values attribute converts it into a NumPy array which, as seen above, can then be converted into a tensor.

# similarly from pandas
series = pd.Series([10,11,12,13,14,15])
series_to_tensor = torch.from_numpy(series.values)
print(series_to_tensor)

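The same idea applies to a column taken from a full DataFrame; here is a small sketch (the DataFrame and the column name 'a' are purely illustrative):

# converting a DataFrame column (a Series) to a tensor via .values
df = pd.DataFrame({'a': [1.5, 2.5, 3.5], 'b': [4, 5, 6]})
column_tensor = torch.from_numpy(df['a'].values)
print(column_tensor)
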
So, by using the ways mentioned above, we can create tensors. To summarize this section: in PyTorch, tensors can be created from a list of numbers, from random data, or from any data that can be represented as a NumPy array.

Now further we’ll discuss various operations that can be performed on tensors. 

Operations that can be performed on tensors 

Basic mathematical operations such as addition, subtraction, division, and multiplication can be done seamlessly in PyTorch. To do so, we just need to place the respective math operator between two tensors, or alternatively use the corresponding methods such as torch.add() and torch.mul(). Below we'll see the multiplication operation.

# create a tensor
test_t1 = torch.tensor([[1, 2, 3], [4, 5, 6]])
test_t2 = test_t1.clone() # make copy of a tensor
print(test_t1)
 
# using operator
test_t1 * test_t2
 
# using direct method
print('Multiplication:\n',torch.mul(test_t1, test_t2))

Output:

tensor([[1, 2, 3],

        [4, 5, 6]])

Multiplication:

 tensor([[ 1,  4,  9],

        [16, 25, 36]])
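
The other arithmetic operations follow the same pattern, either with operators or with their method counterparts. A brief sketch using the tensors defined above:

# element-wise addition, subtraction and division
print(torch.add(test_t1, test_t2))   # same as test_t1 + test_t2
print(torch.sub(test_t1, test_t2))   # same as test_t1 - test_t2
print(torch.div(test_t1, test_t2))   # same as test_t1 / test_t2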

All of the operations above are element-wise: each pair of corresponding elements is combined, which is not matrix multiplication. In most cases, however, we need matrix multiplication, and this can be achieved with the torch.matmul() method.

torch.matmul(test_t1, test_t2.view(3,2)) # reshaping

Output:

tensor([[22, 28],

        [49, 64]])

In the above, I have used the .view() method on the second tensor, which reshapes it. The reason is that multiplying two tensors of the same 2 x 3 shape would violate the matrix multiplication rule (the inner dimensions must match), so the second tensor is reshaped to 3 x 2. As a check, the top-left entry of the result is 1*1 + 2*3 + 3*5 = 22.

Next, similar to DataFrame concatenation, we can concatenate tensors using the torch.cat() method.

# concatenation
torch.cat([test_t1, test_t2], dim=1)

Output:

tensor([[1, 2, 3, 1, 2, 3],

        [4, 5, 6, 4, 5, 6]])

Here, as usual, dim refers to the axis along which the concatenation is performed: 0 for row-wise and 1 for column-wise.
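
For comparison, a quick sketch of row-wise concatenation with dim=0:

# row-wise concatenation stacks the tensors on top of each other
print(torch.cat([test_t1, test_t2], dim=0))   # resulting shape is (4, 3)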

Other operations such as min, max, argmax, and argmin, as well as mathematical functions such as sigmoid and ReLU, can be applied directly to tensors.
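
A brief sketch of a few of these, assuming only standard torch functions:

# reductions and element-wise activation functions on a float tensor
x = torch.tensor([[1.0, -2.0, 3.0], [4.0, -5.0, 6.0]])
print(x.min(), x.max())          # smallest and largest values
print(x.argmin(), x.argmax())    # their positions in the flattened tensor
print(torch.sigmoid(x))          # element-wise sigmoid
print(torch.relu(x))             # element-wise ReLU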

Final words

Through this article, we have discussed tensors, which are essentially generalizations of matrices to N dimensions. All the operations on tensors that we have performed using PyTorch are very similar to those in NumPy. The reason for using PyTorch is that, for huge datasets and heavy computation, these tensors can be moved to higher-throughput devices such as GPUs and TPUs.
