
Computer Vision: Write Your Motion Detection Code Using OpenCV


In this article, I am going to explain how to do motion detection with OpenCV and Python. Before starting, you should be clear about the advantages of artificial vision and how to start programming and developing your own artificial vision applications. The first step is to prepare the system, using Anaconda Navigator and installing the OpenCV library for Python.

Motion detection with OpenCV and Python

Motion detection is used in many applications based on machine vision: for example, when we want to count the people passing by a certain place or the cars going through a toll. In all these cases, the first thing we have to do is extract the people or vehicles present in the scene.

There are different techniques and algorithms that enable motion detection. As in other areas of artificial vision, there is no generic solution; which one to use depends on each situation. Let us have a look at some methods used in OpenCV and computer vision.

Background subtraction

Background subtraction consists of taking an image of the scene without movement and subtracting from it the successive frames that we obtain from a video. The image without movement is called the background, and the frame that we are going to analyze is the foreground. Therefore, we have a background from which we subtract the different frames.
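As a minimal sketch of the idea (the image file names below are placeholders used only for illustration), the subtraction itself is a single OpenCV call:

import cv2

# Placeholder file names, only for illustration
background = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)

# Absolute per-pixel difference: the static background stays close to black,
# while pixels that changed (the moving foreground) become brighter
foreground = cv2.absdiff(background, frame)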


The result is a scene with a black background in which the areas where motion is detected take a different value. It is a very simple technique, and it does not require the subject or object being detected to carry anything that identifies it, such as a sensor, beacon, or special suit. On the other hand, background subtraction is very sensitive to changes in lighting, such as shadows or changes caused by natural light. Another disadvantage is that if the subject or object has a color similar to the background, the movement is either not detected or poorly detected.

Within the background subtraction technique, there are two modalities, depending on how the background is obtained: with a reference image or with previous frames.

Reference Image Subtraction

This modality consists of having a reference image in which there is no moving object; typically the first frame of a video sequence is taken. Moving elements are then obtained by subtracting each frame from this reference image.

It is very sensitive to changes in light. Imagine that you take the reference image in a room with natural light: at 10:00 in the morning there will be certain lighting conditions, but at 18:00 in the evening they will be different. It is also very sensitive to camera movements; a very small movement can cause false positives in the scene. On the other hand, this method works very well in environments with controlled lighting and detects the silhouette of moving objects very accurately.
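A minimal sketch of this modality, assuming a video file named video.mp4 (the file name is only illustrative): the first frame is stored as the reference and every later frame is subtracted from it.

import cv2

cap = cv2.VideoCapture("video.mp4")  # illustrative file name

# Take the first frame as the fixed reference (the background)
ret, reference = cap.read()
reference = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Moving elements: difference between the current frame and the reference
    diff = cv2.absdiff(reference, gray)
    cv2.imshow("Reference subtraction", diff)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()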

Subtraction with previous frames

In this mode, the background is obtained from the previous frames. The technique consists of taking a reference image, letting some time pass by applying a delay, and then comparing it with the frames that we are obtaining. This delay will depend on factors such as the speed of the objects.

One of the biggest disadvantages is that if the moving object or person stays still, it is no longer detected, and the method is not able to detect silhouettes. However, it is fairly robust against changes in lighting and camera movements, and it stabilizes after a while.
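A common way to build the background from previous frames is a running average, which matches the behaviour described above of stabilizing after a while. A sketch, assuming a webcam at index 0 and an example blending weight of 0.05:

import cv2

cap = cv2.VideoCapture(0)  # webcam index 0 is an assumption
background = None

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if background is None:
        # Initialize the background model with the first frame
        background = gray.astype("float")
        continue

    # Blend the current frame slowly into the background model;
    # the small weight (0.05, an example value) plays the role of the delay
    cv2.accumulateWeighted(gray, background, 0.05)

    # Difference between the current frame and the accumulated background
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    cv2.imshow("Previous-frame subtraction", diff)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()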

Phases of the motion detection process

Now we will see the phases that we must follow to create an algorithm that detects movement with OpenCV. The process performs the following tasks:

1. Grayscale conversion and noise removal.

2. Subtraction between the background and the foreground.

3. Application of a threshold to the image resulting from the subtraction.

4. Detection of contours or blobs.

Something very common in computer science, and particularly in computer vision, is the use of parameters. Each parameter can take a range of values, and the correct value will depend on many factors. It is up to us to adapt each value to the specific situation.


There are different techniques that allow us to estimate the set of values that gives the best results. One of them is simulated annealing. I am not going to cover this technique here, since it is complex and not the subject of this article, but you should take it into account when your goal is a professional application. Tuning the parameters carefully is the only way to obtain acceptable results.

Grayscale conversion and denoising

Before performing any operation on the images, it is a good idea to convert them to grayscale; these images are simpler and faster to work with. In addition, the noise caused by the camera itself and by the lighting must be minimized. This is done by averaging each pixel with its neighbors, which is commonly known as smoothing.

OpenCV provides the methods that allow us to convert to grayscale and smooth the image.

# Convert the image to grayscale
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Image smoothing with a Gaussian blur
gray = cv2.GaussianBlur(gray, (21, 21), 0)

Threshold application

In this part of the process, we keep only those pixels that exceed a threshold. The goal is to binarize the image, that is, to have only two possible values: pixels that exceed the threshold become white and those that do not become black. This helps us to select the moving object. In OpenCV there is a method to apply a threshold.
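That method is cv2.threshold. A short sketch, where diff is assumed to be the grayscale result of the subtraction and 25 is just an example threshold value:

# Pixels above the threshold (25, an example value) become white (255),
# the rest become black; THRESH_BINARY produces the two-valued image
_, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

# Optionally dilate the result to fill small holes in the white regions
thresh = cv2.dilate(thresh, None, iterations=2)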

Outline or blob detection

Once we have the image with only black and white pixels, we have to detect the outlines or blobs. A blob is a set of connected pixels, that is, pixels with neighbors that share the same value. By neighbors, we mean pixels that are directly adjacent to each other.
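In OpenCV the blobs can be extracted with cv2.findContours on the binarized image. A sketch, assuming OpenCV 4.x, the thresh image from the previous step and the original colour frame for drawing; the minimum area of 500 pixels is only an example value:

# Find the outlines of the white regions (the blobs)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    # Ignore tiny blobs that are probably noise (500 is an example value)
    if cv2.contourArea(contour) < 500:
        continue
    # Draw a bounding box around each detected moving object
    (x, y, w, h) = cv2.boundingRect(contour)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)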

Code: Basic algorithm of motion detection with OpenCV and Python

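Putting the four phases together, here is a minimal sketch of the complete loop. The webcam index (0), the threshold (25) and the minimum blob area (500) are assumptions chosen for illustration and should be tuned for each scene:

import cv2

cap = cv2.VideoCapture(0)  # webcam index 0 is an assumption
background = None

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # 1. Grayscale conversion and noise removal
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # The first processed frame is taken as the background
    if background is None:
        background = gray
        continue

    # 2. Subtraction between the background and the current frame
    diff = cv2.absdiff(background, gray)

    # 3. Threshold: keep only the pixels that changed significantly
    _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    thresh = cv2.dilate(thresh, None, iterations=2)

    # 4. Contour (blob) detection
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < 500:  # example minimum area to discard noise
            continue
        (x, y, w, h) = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("Motion detection", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()

If the scene has no clear motion-free first frame, the running-average background from the previous section can replace the fixed background used here.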
Dr. Raul V. Rodriguez

Dean at Woxsen School of Business. He is a registered expert in Artificial intelligence, Intelligent Systems, Multi-agent Systems at the European Commission, and has been nominated for the Forbes 30 Under 30 Europe 2020 list.