
Different edge detection techniques with implementation in OpenCV

Edges are one of the most significant aspects of an image.

In the early phases of vision processing, features that are significant for determining the structure and qualities of objects in a scene are identified in images. One such feature is the edge. Edges are substantial local changes in an image that are useful for image analysis, and they often occur on the boundary between two distinct regions of an image. Edge detection is typically the first stage in extracting information from images. In this article, we will focus on understanding the concept and techniques of edge detection offered by OpenCV. Following are the topics to be covered.

Table of contents

  1. About edge detection
  2. Different techniques for edge detection
  3. How does edge detection work?
  4. Learn edge detection with OpenCV

An edge point in an image is a point whose coordinates mark the location of a substantial local shift in intensity. Let’s start by understanding the concept of edges and the information they carry.

About edge detection

Edge detection is a method used in image processing to determine the boundaries (edges) of objects or areas inside an image. Edges are one of the most significant aspects of photographs. 

An image edge is a large local shift in image intensity that is frequently associated with a discontinuity in either the image intensity or its first derivative (gradient). Image intensity discontinuities can be either step discontinuities or line discontinuities.

  • Step discontinuities occur when the image intensity shifts abruptly from one value on one side of the discontinuity to a different value on the opposite side.
  • Line discontinuities occur when the image intensity changes abruptly but returns to its initial value within a short distance.

However, step and line borders are uncommon in real-world photographs. Sharp discontinuities in actual signals are uncommon due to low-frequency components or smoothing introduced by most sensing equipment. Step edges become ramp edges, and line edges become roof edges if intensity changes occur over a limited distance rather than instantaneously.
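As a rough illustration (a toy sketch with made-up values, not drawn from any figure in the article), the four profiles can be written as one-dimensional intensity arrays:

import numpy as np

# Idealised 1-D edge profiles
step = np.concatenate([np.zeros(10), np.ones(10)])                       # abrupt jump
line = np.concatenate([np.zeros(9), np.ones(2), np.zeros(9)])            # brief spike, back to baseline
ramp = np.concatenate([np.zeros(5), np.linspace(0, 1, 10), np.ones(5)])  # gradual rise
roof = np.concatenate([np.linspace(0, 1, 10), np.linspace(1, 0, 10)])    # rise then fall

for name, profile in [("step", step), ("line", line), ("ramp", ramp), ("roof", roof)]:
    print(name, np.round(profile, 2))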


Why is there a need for edge detection?

Edge detection allows users to examine picture features for substantial changes in grey level. These changes mark the end of one region of the image and the start of another. Edge detection decreases the quantity of data in a picture while preserving its structural features.

For example, when detecting fingerprints, preprocessing the image with edge detection is beneficial. The “edges” in this example are the contours of the fingerprint, as opposed to the backdrop on which the fingerprint was produced. This reduces noise, allowing the machine to focus just on the contour of the fingerprint.


Different techniques for edge detection

Edge detection techniques can be broadly divided into two families: gradient-based methods and Gaussian-based methods.

Gradients

The operation of recognising substantial local changes in a picture is known as edge detection. A step edge in one dimension corresponds to a local peak in the first derivative. The gradient is a measure of change in a function, and an image can be thought of as an array of samples of a continuous function of picture intensity. A discrete approximation to the gradient can therefore be used to identify substantial changes in grey level in a picture. The gradient is defined as a vector and is the two-dimensional equivalent of the first derivative. The gradient has two crucial properties:

  1. The vector points in the direction of the greatest rate of increase of the function of the coordinates.
  2. The gradient’s magnitude equals the greatest rate of increase of the function per unit distance in the direction of the vector.
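In standard notation (the conventional definitions, stated here for reference rather than quoted from the original text), for image intensity f(x, y) the gradient, its magnitude and its direction are:

\nabla f = \left( \frac{\partial f}{\partial x},\; \frac{\partial f}{\partial y} \right), \qquad
\lVert \nabla f \rVert = \sqrt{\left( \frac{\partial f}{\partial x} \right)^{2} + \left( \frac{\partial f}{\partial y} \right)^{2}}, \qquad
\theta = \arctan\!\left( \frac{\partial f / \partial y}{\partial f / \partial x} \right)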

However, it is usual practice to approximate the gradient magnitude using absolute values. The gradient’s direction is obtained from vector analysis as the angle measured with respect to the x-axis. The magnitude of the gradient is independent of the edge’s orientation; operators with this property are known as isotropic operators. Three operators are commonly used to approximate the first derivative: the Sobel operator, the Prewitt operator and the Roberts operator, described in the list below (a kernel-level sketch in code follows the list).

  • The Roberts cross operator provides a simple approximation of the gradient magnitude. It is a gradient operator based on 2×2 kernels, in which the differences are computed about an interpolated point between pixels, so the Roberts operator approximates the continuous gradient at that interpolated point rather than at a pixel itself.
  • The Sobel operator represents the gradient’s magnitude. Using a 3×3 neighbourhood for the gradient computation avoids having the gradient computed about an interpolated point between pixels, and the operator places extra weight on the pixels closest to the centre of the mask. It is one of the most frequently used edge detectors.
  • The Prewitt operator employs the same equations as the Sobel operator, except that its constant is one. Unlike the Sobel operator, it does not place extra weight on the pixels towards the centre of the masks.
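As a rough sketch of what these operators do at the kernel level (the standard textbook kernels applied with cv2.filter2D; this illustrates the idea rather than reproducing OpenCV’s internal implementation, and the image file name is the one used later in this article):

import cv2
import numpy as np

# Textbook kernels for the horizontal direction of each operator
roberts_x = np.array([[1, 0],
                      [0, -1]], dtype=np.float32)        # 2×2 cross difference
prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=np.float32)     # uniform weights
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)       # centre row weighted by 2

img = cv2.imread('Taj_mahal_hotel.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float32)
gx_roberts = cv2.filter2D(img, -1, roberts_x)
gx_prewitt = cv2.filter2D(img, -1, prewitt_x)
gx_sobel = cv2.filter2D(img, -1, sobel_x)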

Gaussians

The previously stated edge detectors calculated the first derivative and assumed the presence of an edge point if it was greater than a certain threshold. As a result, an excessive number of edge points are detected. A better strategy would be to discover and examine just the places with local maxima in gradient values. 

This means that at edge points the first derivative has a peak and the second derivative has a zero crossing, so edge points may be identified by locating the zero crossings of the second derivative of the picture intensity. Two widely used Gaussian-based two-dimensional detectors are the Canny edge detector and the Laplacian of Gaussian.

  • The Canny edge detector is based on the first derivative of a Gaussian and closely approximates the operator that optimises the product of signal-to-noise ratio and localization. Convolving the picture with a Gaussian smoothing filter using separable filtering produces an array of smoothed data. Using 2×2 first-difference approximations, the gradient of the smoothed array may then be determined, yielding two arrays of partial derivatives.

The finite differences are averaged over the 2×2 square so that the partial derivatives are computed at the same point in the image. The gradient’s magnitude and orientation may then be calculated using the standard formulae for rectangular-to-polar conversion, where the two-argument arctangent yields an angle over the whole circle of possible directions.
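That rectangular-to-polar step can be sketched directly with OpenCV (a minimal illustration; the Sobel calls stand in for the 2×2 first differences described above):

import cv2

# Partial derivatives of the smoothed image in x and y
img = cv2.imread('Taj_mahal_hotel.jpg', cv2.IMREAD_GRAYSCALE)
smoothed = cv2.GaussianBlur(img, (5, 5), 0)
gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1)

# Rectangular-to-polar conversion: magnitude and full-circle orientation
magnitude, angle = cv2.cartToPolar(gx, gy, angleInDegrees=True)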

  • The Marr-Hildreth approach combines the Laplacian with Gaussian filtering to obtain the Laplacian of Gaussian (LoG). Gaussian filtering is used to remove noise before edge enhancement, because the edge points discovered by locating zero crossings of the second derivative of picture intensity are extremely susceptible to noise. In this method, an image is first convolved with a Gaussian filter.

This stage smoothes the image and lowers noise, filtering out small structures and isolated noise points. Because smoothing causes edges to spread, the edge detector considers only pixels with a locally maximal gradient to be edges, which is accomplished by using the zero crossings of the second derivative. The Laplacian is used to approximate the second derivative in two dimensions because it is an isotropic operator. To prevent detecting trivial edges, only zero crossings whose associated first derivative is above a certain threshold are chosen as edge locations.
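A minimal LoG sketch, assuming the same image file as the rest of the article; the zero-crossing test here is a simplified illustration rather than the full Marr-Hildreth procedure, and the 5% threshold is an arbitrary choice:

import cv2
import numpy as np

img = cv2.imread('Taj_mahal_hotel.jpg', cv2.IMREAD_GRAYSCALE)
smoothed = cv2.GaussianBlur(img, (5, 5), 0)            # suppress noise first
log = cv2.Laplacian(smoothed, cv2.CV_64F, ksize=5)     # second derivative

# Mark pixels where the LoG response changes sign against a horizontal or
# vertical neighbour, keeping only responses strong enough to matter
sign = np.sign(log)
flip = (sign[:, :-1] * sign[:, 1:] < 0)[:-1, :] | (sign[:-1, :] * sign[1:, :] < 0)[:, :-1]
strong = np.abs(log)[:-1, :-1] > 0.05 * np.abs(log).max()
edges = (flip & strong).astype(np.uint8) * 255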

How does edge detection work?

In mathematics, an edge is a line that connects two corners or surfaces. The basic notion underlying edge detection is that regions with large variations in pixel brightness suggest an edge; edge detection is therefore a measure of intensity discontinuity in a picture. Edge detection algorithms have three steps.

Filtering

Because gradient computation based on the intensity values of just two points is susceptible to noise and other vagaries of discrete computation, filtering is typically used to improve an edge detector’s noise performance. However, there is a trade-off between edge strength and noise reduction: more filtering to eliminate noise also reduces edge strength.

Enhancement

It is critical to assess variations in intensity in the vicinity of a point to enable edge identification. Enhancement highlights pixels with a large shift in local intensity values and is often achieved by determining the gradient magnitude.

Detection

The algorithm prioritises points with a distinct edge. However, many locations in an image have nonzero gradient values, and not all of these points are edges for a given application, so a method for determining which points are edge points is needed. Thresholding is frequently employed as the detection criterion.
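The three steps can be sketched as a single pipeline (a hedged illustration; the Gaussian kernel size and the threshold of 100 are arbitrary choices, not values from the article):

import cv2
import numpy as np

img = cv2.imread('Taj_mahal_hotel.jpg', cv2.IMREAD_GRAYSCALE)

# 1. Filtering: suppress noise before differentiation
filtered = cv2.GaussianBlur(img, (5, 5), 0)

# 2. Enhancement: the gradient magnitude highlights local intensity changes
gx = cv2.Sobel(filtered, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(filtered, cv2.CV_32F, 0, 1)
magnitude = cv2.magnitude(gx, gy)

# 3. Detection: keep only points whose gradient exceeds a threshold
edges = (magnitude > 100).astype(np.uint8) * 255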

Some algorithms add a fourth step, localization.

Localization 

If subpixel resolution is necessary for the application, the edge position can be inferred. The orientation of the edge can also be approximated.

It is crucial to remember that detection only indicates the presence of an edge near a pixel in a picture and does not necessarily offer an exact estimate of edge position or orientation. Edge detection errors are misclassification errors: false edges and missing edges. Errors in edge estimation are described by probability distributions for the position and orientation estimates. We distinguish between edge detection and edge estimation because the two are performed by distinct algorithms with different error models.

Learn edge detection with OpenCV

Let’s start by importing the necessary libraries, which will be used throughout the article.

import cv2
import matplotlib.pyplot as plt
import numpy as np

Read and preprocess the image

# Load the image in colour (OpenCV reads channels in BGR order)
original_img = cv2.imread('Taj_mahal_hotel.jpg', cv2.IMREAD_COLOR)
# Convert to a single-channel grayscale image for the edge operators
gray = cv2.cvtColor(original_img, cv2.COLOR_BGR2GRAY)
# Apply a light Gaussian blur to suppress noise before differentiation
blur_img = cv2.GaussianBlur(gray, (3, 3), 0)

Converting the image to grayscale is necessary because the edge detection operators work on single-channel images. The Gaussian blur reduces noise in the image, which is an important preprocessing step.

Plotting the query image after the conversion.

plt.figure(figsize=(10, 10))
# Pass cmap='gray' so the single-channel image is not rendered in false colour
plt.imshow(blur_img, cmap='gray')
plt.title("Converted to Grayscale")
plt.show()

Sobel edge detector

Here we compare three scenarios: edge detection along the X axis, along the Y axis, and along both axes together.

# dx=1, dy=0: first derivative along X, which responds to vertical edges
sobelx = cv2.Sobel(src=blur_img, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
filtered_image_x = cv2.convertScaleAbs(sobelx)

# dx=0, dy=1: first derivative along Y, which responds to horizontal edges
sobely = cv2.Sobel(src=blur_img, ddepth=cv2.CV_64F, dx=0, dy=1, ksize=5)
filtered_image_y = cv2.convertScaleAbs(sobely)

# dx=1, dy=1: derivative in both directions at once
sobelxy = cv2.Sobel(src=blur_img, ddepth=cv2.CV_64F, dx=1, dy=1, ksize=5)
filtered_image_xy = cv2.convertScaleAbs(sobelxy)
 
plt.figure(figsize=(18,19))
plt.subplot(221)
plt.imshow(blur_img, cmap='gray')
plt.title('Original') 
plt.axis("off")
 
plt.subplot(222)
plt.imshow(filtered_image_x, cmap='gray')
plt.title('Sobel X') 
plt.axis("off")
 
plt.subplot(223)
plt.imshow(filtered_image_y, cmap='gray')
plt.title('Sobel Y') 
plt.axis("off")
 
plt.subplot(224)
plt.imshow(filtered_image_xy, cmap='gray')
plt.title('Sobel X Y')
plt.axis("off")
plt.show()

Neither single-direction filter was able to detect the edges well on its own, but the bidirectional edge detection did a fairly good job of detecting the edges of the objects in the image.
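Note that dx=1, dy=1 asks Sobel for the mixed derivative in both directions at once; another common way to get a bidirectional result (a sketch, assuming the sobelx, sobely, filtered_image_x and filtered_image_y arrays from the snippet above) is to combine the two directional responses:

# Combine the X and Y responses into a gradient magnitude
sobel_combined = cv2.magnitude(sobelx, sobely)
filtered_image_combined = cv2.convertScaleAbs(sobel_combined)

# Or take a simple weighted blend of the absolute responses
blended = cv2.addWeighted(filtered_image_x, 0.5, filtered_image_y, 0.5, 0)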

Canny edge detector

The Canny method uses a three-stage procedure to extract edges from an image: gradient computation, non-maximum suppression, and hysteresis thresholding. With the initial blurring included, the process totals four stages.

edges = cv2.Canny(image=blur_img, threshold1=100, threshold2=200)
 
plt.figure(figsize=(18,19))
plt.subplot(121)
plt.imshow(blur_img, cmap='gray')
plt.title('Original') 
plt.axis("off")
 
plt.subplot(122)
plt.imshow(edges, cmap='gray')
plt.title('Edge image')
plt.axis("off")
plt.show()
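The two thresholds control the hysteresis stage: gradient values above threshold2 are accepted as strong edges, while values between the two thresholds are kept only if they connect to a strong edge. A quick way to see their effect (the values below are arbitrary illustrations):

# Looser and tighter hysteresis bands for comparison
edges_loose = cv2.Canny(blur_img, threshold1=50, threshold2=150)
edges_tight = cv2.Canny(blur_img, threshold1=150, threshold2=250)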

Laplacian edge detector

The Laplacian edge detector uses an image’s second derivatives: in a single pass, it measures the rate at which the first derivative changes.

# ddepth sets the output depth; the kernel size is passed as ksize
laplacian = cv2.Laplacian(blur_img, ddepth=cv2.CV_64F, ksize=5)
filtered_image = cv2.convertScaleAbs(laplacian)
plt.figure(figsize=(18,19))
plt.subplot(121)
plt.imshow(blur_img, cmap='gray')
plt.title('Original') 
plt.axis("off")
 
plt.subplot(122)
plt.imshow(filtered_image, cmap='gray')
plt.title('Edge image')
plt.axis("off")
plt.show()

Comparing the results of all three methods, the Canny edge detector did the best job of detecting the edges of the objects.

Conclusions

In a digital image, edges are large local variations in intensity. An edge is a collection of connected pixels that defines a boundary between two distinct regions. With this article, we have understood the concept and operation of edge detection, along with an implementation using OpenCV.
