In the modern era of data science, we need to deal with many kinds of data, and an image can carry a great deal of information that leads to useful ideas. There are various techniques for working with image data, and one of the most important is finding out what an image contains. Locating a smaller object inside a larger image that holds one or more objects, using an image of that object, is called template matching.
Template Matching
Template matching is a technique for extracting or highlighting an area or object in an image using a smaller image of that area. The basic goal of template matching is to find a smaller image, or template, inside a larger image.
As the above statement suggests, template matching requires two images: a source image that contains the template somewhere inside it, and a template image that we want to locate in the source image.
Basically, in template matching, we find the location of the template image within the source image. From this, we can understand that the source image must be larger than the template image.
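At its core, the whole technique boils down to two OpenCV calls: cv2.matchTemplate, which slides the template over the source image and scores every position, and cv2.minMaxLoc, which picks the best-scoring position. The minimal sketch below assumes two grayscale images loaded from hypothetical paths source.jpg and template.jpg; the full walkthrough later in the article does the same thing for all of the available matching methods.

import cv2

# Hypothetical paths; replace them with your own source and template images.
source = cv2.imread("source.jpg", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.jpg", cv2.IMREAD_GRAYSCALE)

# Slide the template over the source image and score every position.
result = cv2.matchTemplate(source, template, cv2.TM_CCOEFF_NORMED)

# For TM_CCOEFF_NORMED, the best match is the maximum of the result map.
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("Best match at", max_loc, "with score", max_val)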
In this article, we are going to see how we can perform this using Python. To perform template matching in Python, we have an open-source library called OpenCV, which is mainly used to analyse and augment image and video data. For a better understanding of the subject, let’s have an overview of OpenCV.
OpenCV
OpenCV is a library for performing computer vision in Python. It is a Python binding, and besides Python it can also be used from other programming languages such as C++ and Java. We can perform many tasks using OpenCV, like image processing, image blending, and composition of images. It also integrates well with libraries like NumPy, pandas, and SciPy, which are highly optimized for numerical operations and data augmentation. There are many applications of OpenCV, such as the following (a quick check that OpenCV is installed in your environment is shown just after this list):
- Image recognition.
- Face recognition.
- Image differencing.
- Image blending.
- Image composition.
- Object recognition.
- Automated inspection and surveillance.
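OpenCV usually comes preinstalled in Google Colab; if you are working in a local environment instead, you may need to install it first (the package on PyPI is called opencv-python) and can verify the installation with a quick version check:

# Install OpenCV if it is missing; the PyPI package name is opencv-python.
# pip install opencv-python
import cv2
print(cv2.__version__)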
Later in the article, we will talk about template matching, an operation that comes under the subject of image processing. Let’s have an overview of image processing so that we can apply it in our next projects as well.
Image Processing
As the name suggests, image processing is the domain of operations where we extract information from images or edit and enhance them. More formally, we can say that image processing is the process of analysing and editing digitized images to improve their quality or to support further analysis.
Mathematically, an image can be considered a two-dimensional array (for a black-and-white image) or a three-dimensional array (for a coloured image). In image processing, we apply methods to those arrays to make changes to the image data.
The basic unit we work with in image processing is the pixel, which is simply the value stored at a given x and y position (in a black-and-white image) or x, y and channel position (in a coloured image) in the image array. The dimensions of the array define the size of the image, and the stored values define the intensity or colour of each pixel.
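A quick way to see this array view of an image is to build two tiny images with NumPy and inspect their shapes. The sketch below uses made-up 4x4 arrays rather than a real photograph, purely to illustrate the two-dimensional versus three-dimensional distinction:

import numpy as np

# A 4x4 grayscale "image": one intensity value (0-255) per pixel.
gray = np.zeros((4, 4), dtype=np.uint8)
gray[1, 2] = 255           # turn a single pixel white
print(gray.shape)          # (4, 4) -> two-dimensional array

# A 4x4 colour "image": three channel values per pixel.
colour = np.zeros((4, 4, 3), dtype=np.uint8)
colour[1, 2] = (255, 0, 0) # set all three channels of one pixel
print(colour.shape)        # (4, 4, 3) -> three-dimensional array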
Basically, image processing consists of three processes.
- Importing the image
- Analysis and manipulation of the image
- Output (result of analysis and manipulation)
Let’s have a look at these steps in Google Colab.
Importing the libraries:
Input:
import cv2
from google.colab.patches import cv2_imshow
Reading the image.
Input:
pic = cv2.imread('/content/drive/MyDrive/Yugesh/Template Matching (find object in an image)/photo-1509909756405-be0199881695.jpeg')
Showing the image.
Input:
cv2_imshow(pic)
cv2.waitKey(0)
Output:

This is how we can read and show our images in Google Colab.
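Note that cv2_imshow is a Colab-specific helper from google.colab.patches, since the regular OpenCV window functions do not work inside a notebook. If you run the same code locally as a script, the standard OpenCV display calls would look like this:

# Standard OpenCV display calls for a local (non-Colab) environment.
cv2.imshow("picture", pic)   # opens a native window titled "picture"
cv2.waitKey(0)               # wait for a key press
cv2.destroyAllWindows()      # close the window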
Printing the array format of the image.
Input:
print(pic)
Output:

Here we can see the array format of a coloured image, which is a three-dimensional array. We can convert this image into a grayscale image.
Input:
# cv2.imread loads images in BGR channel order, so we convert from BGR to grayscale
pic = cv2.cvtColor(pic, cv2.COLOR_BGR2GRAY)
Viewing the grayscale image.
Input:
cv2_imshow(pic)
cv2.waitKey(0)
Output:

Let’s check the array format for the converted image.
Input:
print(pic)
Output:

In the above outputs, we can see how manipulating an image changes it mathematically.
We can save the image or the final output.
Input:
cv2.imwrite("graypic.jpg", pic)
Output:

So this is how we can perform basic image processing operations.
But this article is mainly focused on template matching and object detection, so in the next steps, we will see how we can detect a template image within a source image.

We will use the above image as our source image for template matching, and we are going to match or detect the football in the image using OpenCV in Python.

This is the football image we are going to use as the template.
Code Implementation of Template Matching
Importing the libraries.
Input:
import numpy as np
import cv2
Reading the images.
Input:
src = cv2.imread("/content/drive/MyDrive/Yugesh/Template Matching (find object in an image)/souce.jpg")
Temp = cv2.imread("/content/drive/MyDrive/Yugesh/Template Matching (find object in an image)/tamplet.png")
Most image processing methods work on grayscale images; that is why we need to convert both images into grayscale.
Converting the images into grayscale.
Input:
# Convert both images from BGR (imread's channel order) to grayscale
src = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
temp = cv2.cvtColor(Temp, cv2.COLOR_BGR2GRAY)
Pulling the height and width of the src image into the height and width variables, and the height and width of the temp image into H and W.
Input:
height, width = src.shape
height, width
Output:

Input:
H, W = temp.shape
H, W
Output:

Here we can see the shapes of our images. The template's height H and width W will be used later to draw a rectangle of the template's size around the matched location.
cv2 provides several methods to perform template matching. In this article, we will run and test all of them so that we can easily choose one of them for our future projects. So here I am defining a list of all the methods.
Input:
methods = [cv2.TM_CCOEFF, cv2.TM_CCOEFF_NORMED, cv2.TM_CCORR, cv2.TM_CCORR_NORMED, cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]
In the next step, we will loop over this list of methods and get the result for each of them. Note that for the squared-difference methods (cv2.TM_SQDIFF and cv2.TM_SQDIFF_NORMED) the best match is where the result map is smallest, while for the other methods it is where the result map is largest, so the loop picks min_loc or max_loc accordingly.
Input:
for method in methods:
    src2 = src.copy()
    # Slide the template over the source image and score every position.
    result = cv2.matchTemplate(src2, temp, method)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
    print(min_loc, max_loc)
    # For the squared-difference methods the best match is at the minimum
    # of the result map; for the other methods it is at the maximum.
    if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
        location = min_loc
    else:
        location = max_loc
    # Draw a rectangle of the template's size at the matched location.
    bottom_right = (location[0] + W, location[1] + H)
    cv2.rectangle(src2, location, bottom_right, 255, 5)
    cv2_imshow(src2)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
Output:
(267, 403) (672, 83)

(544, 817) (672, 83)

(1054, 77) (207, 163)

(88, 232) (672, 83)

(672, 83) (270, 405)

(672, 83) (0, 0)

Each printed pair shows the minimum and maximum locations of the result map for that method. Looking at them, five of the six methods agree on the match: their best-match location is (672, 83) in the source image, and that region is covered by the rectangle in the corresponding output image. Only cv2.TM_CCORR points elsewhere, which is a known weakness of unnormalized cross-correlation: it tends to be drawn towards bright regions rather than the true match.
There are various ways to perform template matching; this is one of them, and as we can see, it works well and is simple to perform. We don’t need deep knowledge of the internals and terminology of image processing to use it.
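As a follow-up for future projects, one practical pattern is to pick a single normalized method and apply a score threshold to decide whether the template is present in the image at all. The sketch below reuses the src, temp, W and H variables from above; the 0.8 threshold is only an assumed value and would need tuning for your own images.

# A minimal sketch: single-method matching with a presence threshold.
# The 0.8 threshold is an assumption and should be tuned per use case.
result = cv2.matchTemplate(src, temp, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
if max_val >= 0.8:
    top_left = max_loc
    bottom_right = (top_left[0] + W, top_left[1] + H)
    cv2.rectangle(src, top_left, bottom_right, 255, 5)
    cv2_imshow(src)
else:
    print("Template not found with enough confidence:", max_val)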
References
- OpenCV.
- Demo image.
- Source image and template image.
- Google Colab for the code.