Background Subtraction in Videos with Python and OpenCV
In this tutorial, we will learn how to perform background subtraction in videos using Python and OpenCV. Background subtraction is a widely used method to detect and track moving objects in real-time applications, such as surveillance, traffic monitoring, and human activity recognition.
Prerequisites
To follow this tutorial, you should have Python 3 installed on your system and be familiar with OpenCV basics. Also, make sure you have the following Python libraries installed:
- OpenCV (pip install opencv-python)
- NumPy (pip install numpy)
Background Subtraction Methods in OpenCV
OpenCV provides multiple background subtraction algorithms to choose from, such as:
- MOG (Mixture of Gaussians): Models each background pixel with a mixture of Gaussian distributions. It is based on the paper by Stauffer and Grimson (1999).
- MOG2 (Mixture of Gaussians 2): An improved version of MOG that automatically selects the number of Gaussian components per pixel, which helps it adapt to changes in the scene, and optionally detects shadows. It was introduced by Zivkovic (2004).
- KNN (K-Nearest Neighbors): Classifies each pixel as background or foreground based on the K nearest neighbors among its recently observed samples. It was introduced by Zivkovic and van der Heijden (2006).
In this tutorial, we will use the MOG2 method. However, you can experiment with other methods as well.
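For reference, here is a minimal sketch showing how each of these subtractors can be constructed. MOG2 and KNN ship with the standard opencv-python package, while the original MOG implementation lives in the bgsegm extra module, which requires the opencv-contrib-python package instead:

import cv2

# MOG2 and KNN are part of the main OpenCV module
subtractor_mog2 = cv2.createBackgroundSubtractorMOG2()
subtractor_knn = cv2.createBackgroundSubtractorKNN()

# The original MOG lives in the bgsegm contrib module
# (available only if opencv-contrib-python is installed)
subtractor_mog = cv2.bgsegm.createBackgroundSubtractorMOG()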
Performing Background Subtraction
Let's start by importing the required libraries:
import cv2
import numpy as np
Next, we need to create a VideoCapture object to read the video frames:
video = cv2.VideoCapture("path/to/your/video")
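Replace "path/to/your/video" with the path to your own video file. You can also pass a camera index instead of a path to read from a webcam, and it is a good idea to confirm that the capture opened successfully before processing frames. A minimal sketch:

# To read from the default webcam instead of a file, pass a device index:
# video = cv2.VideoCapture(0)

# Make sure the video source was opened successfully
if not video.isOpened():
    raise IOError("Could not open the video source")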
Create the background subtractor object using the MOG2 method:
background_subtractor = cv2.createBackgroundSubtractorMOG2()
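The constructor also accepts a few optional parameters that control how the background model behaves. The sketch below shows them with their default values, so you can see what there is to tune for your own footage:

background_subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500,        # number of recent frames used to build the background model
    varThreshold=16,    # threshold on the squared Mahalanobis distance to the model
    detectShadows=True, # mark detected shadows in the mask (gray value 127)
)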
Now, we will process each frame of the video and apply the background subtraction:
while True:
    ret, frame = video.read()
    if not ret:
        break

    # Apply the background subtraction
    fg_mask = background_subtractor.apply(frame)

    # Display the original frame and the foreground mask
    cv2.imshow("Frame", frame)
    cv2.imshow("Foreground Mask", fg_mask)

    # Exit the loop if the user presses the 'q' key
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

# Release the video capture object and close the windows
video.release()
cv2.destroyAllWindows()
The above code displays the original video and the foreground mask in two separate windows. In the foreground mask, white pixels represent moving objects and black pixels represent the background; because MOG2 detects shadows by default, shadow pixels show up in gray (value 127) rather than white.
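If you prefer a strictly binary mask, you can threshold away the gray shadow pixels right after calling apply(). A minimal sketch, assuming shadow detection is left enabled:

# Keep only confident foreground pixels (255) and discard shadow pixels (127)
_, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)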
Improving the Foreground Mask
To improve the foreground mask and reduce noise, you can apply morphological operations such as erosion and dilation. Here's an example using the cv2.erode() and cv2.dilate() functions:
kernel = np.ones((5, 5), np.uint8)
# Apply erosion and dilation to reduce noise
fg_mask = cv2.erode(fg_mask, kernel, iterations=1)
fg_mask = cv2.dilate(fg_mask, kernel, iterations=2)
Add the above code snippet right after the fg_mask = background_subtractor.apply(frame) line in the while loop.
Now, you should have a better foreground mask, with reduced noise and more accurate object detection.
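Once the mask is clean, one common way to turn it into actual detections is to extract contours from the mask and draw bounding boxes around the larger ones. This is a sketch rather than part of the pipeline above; it assumes OpenCV 4, where cv2.findContours returns two values, and the minimum area of 500 pixels is an arbitrary value you will want to tune for your video:

# Find the outer contours of the white regions in the mask
contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    # Skip small blobs that are likely leftover noise
    if cv2.contourArea(contour) < 500:
        continue
    # Draw a bounding box around each detected moving object
    x, y, w, h = cv2.boundingRect(contour)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

Place this inside the while loop, after the morphological operations and before the cv2.imshow() calls, so the boxes appear on the displayed frame.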
Conclusion
In this tutorial, we learned how to perform background subtraction in videos using Python and OpenCV. We discussed different background subtraction methods provided by OpenCV, and we used the MOG2 method to detect and track moving objects in a video. Additionally, we improved the foreground mask by applying morphological operations. Experiment with different background subtraction methods and parameters to achieve the best results for your specific application.