How to Use Background Subtraction Methods {#tutorial_background_subtraction}
=========================================

- Background subtraction (BS) is a common and widely used technique for generating a foreground
  mask (namely, a binary image containing the pixels belonging to moving objects in the scene) by
  using static cameras.
- As the name suggests, BS calculates the foreground mask by performing a subtraction between the
  current frame and a background model, containing the static part of the scene or, more
  generally, everything that can be considered as background given the characteristics of the
  observed scene. A naive sketch of this idea is given right after this list.

  ![](images/Background_Subtraction_Tutorial_Scheme.png)

- Background modeling consists of two main steps:
  -# Background Initialization;
  -# Background Update.

  In the first step, an initial model of the background is computed, while in the second step that
  model is updated in order to adapt to possible changes in the scene.
- In this tutorial we will learn how to perform BS by using OpenCV. As input, we will use data
  coming from the publicly available data set [Background Models Challenge
  (BMC)](http://bmc.univ-bpclermont.fr/).
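
As a rough illustration of the subtraction idea only (not of the algorithms used in this tutorial),
a single static background image can be subtracted from the current frame and the difference
thresholded to obtain a foreground mask. The file names and the threshold value below are
illustrative assumptions, not part of the tutorial sample.
@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;

int main()
{
    // hypothetical input images (assumed to have the same size):
    // a shot of the empty scene and a frame containing moving objects
    Mat background = imread("background.png", IMREAD_GRAYSCALE);
    Mat frame      = imread("frame.png", IMREAD_GRAYSCALE);

    Mat diff, fgMask;
    absdiff(frame, background, diff);                 // per-pixel |frame - background|
    threshold(diff, fgMask, 30, 255, THRESH_BINARY);  // large differences become foreground

    imshow("Naive foreground mask", fgMask);
    waitKey(0);
    return 0;
}
@endcode
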
Goals
-----

In this tutorial you will learn how to:

-# Read data from videos by using @ref cv::VideoCapture or image sequences by using @ref
   cv::imread ;
-# Create and update the background model by using the @ref cv::BackgroundSubtractor class;
-# Get and show the foreground mask by using @ref cv::imshow ;
-# Save the output by using @ref cv::imwrite to quantitatively evaluate the results.
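
The four steps above fit together as in the following compact sketch, which is independent of the
full sample shown below; the input file name, the window title and the output file pattern are
assumptions made only for this illustration.
@code{.cpp}
#include <opencv2/video.hpp>     // BackgroundSubtractorMOG2
#include <opencv2/videoio.hpp>   // VideoCapture
#include <opencv2/imgcodecs.hpp> // imwrite
#include <opencv2/highgui.hpp>   // imshow, waitKey
using namespace cv;

int main()
{
    VideoCapture capture("test.avi");               // hypothetical input video
    Ptr<BackgroundSubtractor> pMOG2 = createBackgroundSubtractorMOG2();

    Mat frame, fgMask;
    int frameIdx = 0;
    while (capture.read(frame))                     // 1. read the data
    {
        pMOG2->apply(frame, fgMask);                // 2. update the model and get the mask
        imshow("FG Mask", fgMask);                  // 3. show the foreground mask
        imwrite(format("mask_%06d.png", frameIdx++), fgMask); // 4. save it for evaluation
        if ((char)waitKey(30) == 27) break;         // ESC stops the loop
    }
    return 0;
}
@endcode
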
Code
----

In the following you can find the source code. We will let the user choose to process either a video
file or a sequence of images.

Two different methods are used to generate two foreground masks:
-# cv::bgsegm::BackgroundSubtractorMOG
-# @ref cv::BackgroundSubtractorMOG2

The results as well as the input data are shown on the screen.
The source file can be downloaded [here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/video/bg_sub.cpp).

@include samples/cpp/tutorial_code/video/bg_sub.cpp

Explanation
-----------

We discuss the main parts of the above code:

-# First, three Mat objects are allocated to store the current frame and two foreground masks,
    obtained by using two different BS algorithms.
    @code{.cpp}
    Mat frame; //current frame
    Mat fgMaskMOG; //fg mask generated by MOG method
    Mat fgMaskMOG2; //fg mask generated by MOG2 method
    @endcode
-# Two @ref cv::BackgroundSubtractor objects will be used to generate the foreground masks. In this
    example, default parameters are used, but it is also possible to declare specific parameters in
    the create function.
    @code{.cpp}
    Ptr<BackgroundSubtractor> pMOG; //MOG Background subtractor
    Ptr<BackgroundSubtractor> pMOG2; //MOG2 Background subtractor
    ...
    //create Background Subtractor objects
    pMOG = createBackgroundSubtractorMOG(); //MOG approach
    pMOG2 = createBackgroundSubtractorMOG2(); //MOG2 approach
    @endcode
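    For instance, the MOG2 create function (createBackgroundSubtractorMOG2) accepts the history
    length, the variance threshold and the shadow-detection flag; the values below are only
    illustrative, not a recommendation:
    @code{.cpp}
    //500 frames of history, variance threshold 16, shadow detection enabled
    pMOG2 = createBackgroundSubtractorMOG2(500, 16, true);
    @endcode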
-# The command line arguments are analysed. The user can choose between two options:
    - video files (by choosing the option -vid);
    - image sequences (by choosing the option -img).
    @code{.cpp}
    if(strcmp(argv[1], "-vid") == 0) {
        //input data coming from a video
        processVideo(argv[2]);
    }
    else if(strcmp(argv[1], "-img") == 0) {
        //input data coming from a sequence of images
        processImages(argv[2]);
    }
    @endcode
-# Suppose you want to process a video file. The video is read until the end is reached or the user
    presses the 'q' key or the ESC key.
    @code{.cpp}
    while( (char)keyboard != 'q' && (char)keyboard != 27 ){
        //read the current frame
        if(!capture.read(frame)) {
            cerr << "Unable to read next frame." << endl;
            cerr << "Exiting..." << endl;
            exit(EXIT_FAILURE);
        }
    @endcode
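    The keyboard variable used in the loop condition is an integer that the full sample updates once
    per iteration through @ref cv::waitKey . A minimal sketch of that part follows; the 30 ms delay
    is only an assumption made for this illustration:
    @code{.cpp}
    int keyboard = 0; //last pressed key, 0 if none
    ...
    //inside the loop, after showing the images:
    keyboard = waitKey(30); //wait 30 ms for a key press; also lets HighGUI refresh the windows
    @endcode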
-# Every frame is used both for calculating the foreground mask and for updating the background. If
    you want to change the learning rate used for updating the background model, it is possible to
    set a specific learning rate by passing a third parameter to the 'apply' method.
    @code{.cpp}
    //update the background model
    pMOG->apply(frame, fgMaskMOG);
    pMOG2->apply(frame, fgMaskMOG2);
    @endcode
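    For example, the learning rate can be passed explicitly. It lies in [0,1] (0 means the
    background model is never updated, 1 means it is completely reinitialized from the last frame),
    while a negative value lets the algorithm choose it automatically. The value 0.005 below is only
    an illustrative choice:
    @code{.cpp}
    //update the background model with a fixed, slow learning rate
    pMOG2->apply(frame, fgMaskMOG2, 0.005);
    @endcode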
-# The current frame number can be extracted from the @ref cv::VideoCapture object and stamped in
    the top left corner of the current frame. A white rectangle is used to highlight the black
    colored frame number.
    @code{.cpp}
    //get the frame number and write it on the current frame
    stringstream ss;
    rectangle(frame, cv::Point(10, 2), cv::Point(100,20),
              cv::Scalar(255,255,255), -1);
    ss << capture.get(CAP_PROP_POS_FRAMES);
    string frameNumberString = ss.str();
    putText(frame, frameNumberString.c_str(), cv::Point(15, 15),
            FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0,0,0));
    @endcode
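    An equivalent, more compact way to build the frame-number string is @ref cv::format, shown here
    only as an alternative to the stringstream used by the sample:
    @code{.cpp}
    String frameNumberString = format("%d", (int)capture.get(CAP_PROP_POS_FRAMES));
    @endcode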
-# We are ready to show the current input frame and the results.
    @code{.cpp}
    //show the current frame and the fg masks
    imshow("Frame", frame);
    imshow("FG Mask MOG", fgMaskMOG);
    imshow("FG Mask MOG 2", fgMaskMOG2);
    @endcode
-# The same operations listed above can be performed using a sequence of images as input. The
    processImages function is called and, instead of using a @ref cv::VideoCapture object, the
    images are read by using @ref cv::imread , after determining the correct path for the next
    frame to read.
    @code{.cpp}
    //read the first file of the sequence
    frame = imread(fistFrameFilename);
    if(!frame.data){
        //error in opening the first image
        cerr << "Unable to open first image frame: " << fistFrameFilename << endl;
        exit(EXIT_FAILURE);
    }
    ...
    //search for the next image in the sequence
    ostringstream oss;
    oss << (frameNumber + 1);
    string nextFrameNumberString = oss.str();
    string nextFrameFilename = prefix + nextFrameNumberString + suffix;
    //read the next frame
    frame = imread(nextFrameFilename);
    if(!frame.data){
        //error in opening the next image in the sequence
        cerr << "Unable to open image frame: " << nextFrameFilename << endl;
        exit(EXIT_FAILURE);
    }
    //update the path of the current frame
    fn.assign(nextFrameFilename);
    @endcode
    Note that this example works only on image sequences in which the filename format is \<n\>.png,
    where n is the frame number (e.g., 7.png).
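    The prefix and suffix strings used above are obtained by splitting the path of the first image
    around the frame number. The snippet below is only a sketch of one way to perform this split
    (it is not the code of the sample) for a path such as 111_png/input/1.png:
    @code{.cpp}
    //split "111_png/input/1.png" into prefix "111_png/input/", number 1 and suffix ".png"
    string fn("111_png/input/1.png");
    size_t lastSlash = fn.find_last_of('/');
    size_t lastDot   = fn.find_last_of('.');
    string prefix = fn.substr(0, lastSlash + 1);                                       // "111_png/input/"
    string suffix = fn.substr(lastDot);                                                // ".png"
    int frameNumber = atoi(fn.substr(lastSlash + 1, lastDot - lastSlash - 1).c_str()); // 1
    @endcode
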
Results
-------

- Given the following input parameters:
  @code{.cpp}
  -vid Video_001.avi
  @endcode
  The output of the program will look like the following:

  ![](images/Background_Subtraction_Tutorial_Result_1.png)

- The video file Video_001.avi is part of the [Background Models Challenge
  (BMC)](http://bmc.univ-bpclermont.fr/) data set and can be downloaded from the following link:
  [Video_001](http://bmc.univ-bpclermont.fr/sites/default/files/videos/evaluation/Video_001.zip)
  (about 32 MB).
- If you want to process a sequence of images, then the '-img' option has to be chosen:
  @code{.cpp}
  -img 111_png/input/1.png
  @endcode
  The output of the program will look like the following:

  ![](images/Background_Subtraction_Tutorial_Result_2.png)

- The sequence of images used in this example is part of the [Background Models Challenge
  (BMC)](http://bmc.univ-bpclermont.fr/) dataset and can be downloaded from the following link:
  [sequence 111](http://bmc.univ-bpclermont.fr/sites/default/files/videos/learning/111_png.zip)
  (about 708 MB). Please note that this example works only on sequences in which the filename
  format is \<n\>.png, where n is the frame number (e.g., 7.png).

Evaluation
----------

To quantitatively evaluate the results obtained, we need to:

- Save the output images;
- Have the ground truth images for the chosen sequence.

In order to save the output images, we can use @ref cv::imwrite . Adding the following code allows
for saving the foreground masks.
@code{.cpp}
string imageToSave = "output_MOG_" + frameNumberString + ".png";
bool saved = imwrite(imageToSave, fgMaskMOG);
if(!saved) {
    cerr << "Unable to save " << imageToSave << endl;
}
@endcode

Once we have collected the result images, we can compare them with the ground truth data. There
exist several publicly available sequences for background subtraction that come with ground truth
data. If you decide to use the [Background Models Challenge (BMC)](http://bmc.univ-bpclermont.fr/),
then the result images can be used as input for the [BMC
Wizard](http://bmc.univ-bpclermont.fr/?q=node/7). The wizard can compute different measures about
the accuracy of the results.
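
If you also want a quick local check without the wizard, a simple per-pixel comparison between a
produced mask and a binary ground-truth mask can be computed as in the following sketch. The file
names are assumptions made for this illustration, and both images are expected to be single-channel
binary masks with foreground pixels set to 255 and at least some foreground present.
@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    // hypothetical file names: one mask produced by the program and the matching ground truth
    Mat result = imread("output_MOG_1.png", IMREAD_GRAYSCALE);
    Mat truth  = imread("ground_truth_1.png", IMREAD_GRAYSCALE);

    Mat notResult, notTruth;
    bitwise_not(result, notResult);
    bitwise_not(truth, notTruth);

    double truePos  = countNonZero(result & truth);     // foreground in both masks
    double falsePos = countNonZero(result & notTruth);  // foreground only in the result
    double falseNeg = countNonZero(notResult & truth);  // foreground only in the ground truth

    cout << "Precision: " << truePos / (truePos + falsePos) << endl;
    cout << "Recall:    " << truePos / (truePos + falseNeg) << endl;
    return 0;
}
@endcode
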
References
----------

- [Background Models Challenge (BMC) website](http://bmc.univ-bpclermont.fr/)
- A Benchmark Dataset for Foreground/Background Extraction @cite vacavant2013benchmark