Update background subtraction tutorial with Java and Python codes.

pull/12967/head
catree 6 years ago
parent defeda2f70
commit 4bea70a64a
1.  257  doc/tutorials/video/background_subtraction/background_subtraction.markdown
2.  BIN  doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_Result_1.png
3.  BIN  doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_Result_2.png
4.  BIN  doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_frame.jpg
5.  BIN  doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_result_KNN.jpg
6.  BIN  doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_result_MOG2.jpg
7.  2    doc/tutorials/video/table_of_content_video.markdown
8.  200  samples/cpp/tutorial_code/video/bg_sub.cpp
9.  79   samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java
10. 51   samples/python/tutorial_code/video/background_subtraction/bg_sub.py

doc/tutorials/video/background_subtraction/background_subtraction.markdown
@@ -19,190 +19,149 @@ How to Use Background Subtraction Methods {#tutorial_background_subtraction}

In the first step, an initial model of the background is computed, while in the second step that
model is updated in order to adapt to possible changes in the scene.

-   In this tutorial we will learn how to perform BS by using OpenCV.

Goals
-----

In this tutorial you will learn how to:

-#  Read data from videos or image sequences by using @ref cv::VideoCapture ;
-#  Create and update the background model by using @ref cv::BackgroundSubtractor class;
-#  Get and show the foreground mask by using @ref cv::imshow ;

Code
----

In the following you can find the source code. We will let the user choose to process either a video
file or a sequence of images.

We will use @ref cv::BackgroundSubtractorMOG2 in this sample, to generate the foreground mask.

The results as well as the input data are shown on the screen.

@add_toggle_cpp
-   **Downloadable code**: Click
    [here](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/video/bg_sub.cpp)
-   **Code at glance:**
    @include samples/cpp/tutorial_code/video/bg_sub.cpp
@end_toggle

@add_toggle_java
-   **Downloadable code**: Click
    [here](https://github.com/opencv/opencv/tree/3.4/samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java)
-   **Code at glance:**
    @include samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java
@end_toggle

@add_toggle_python
-   **Downloadable code**: Click
    [here](https://github.com/opencv/opencv/tree/3.4/samples/python/tutorial_code/video/background_subtraction/bg_sub.py)
-   **Code at glance:**
    @include samples/python/tutorial_code/video/background_subtraction/bg_sub.py
@end_toggle
Explanation
-----------

We discuss the main parts of the code above:

-   A @ref cv::BackgroundSubtractor object will be used to generate the foreground mask. In this
    example, default parameters are used, but it is also possible to declare specific parameters in
    the create function.

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/video/bg_sub.cpp create
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java create
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/video/background_subtraction/bg_sub.py create
@end_toggle
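If the defaults are not appropriate, the create function accepts the model parameters directly. A minimal Python sketch; the parameter names are those documented for `cv.createBackgroundSubtractorMOG2`, and the values are simply the defaults written out for illustration:

@code{.py}
import cv2 as cv

# Create a MOG2 background subtractor with explicit parameters instead of the
# defaults. history, varThreshold and detectShadows are the documented
# parameters of cv.createBackgroundSubtractorMOG2; the values below are only
# illustrative (they match the defaults).
backSub = cv.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                            detectShadows=True)
@endcode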
-   A @ref cv::VideoCapture object is used to read the input video or input images sequence.

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/video/bg_sub.cpp capture
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java capture
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/video/background_subtraction/bg_sub.py capture
@end_toggle
-   Every frame is used both for calculating the foreground mask and for updating the background. If
    you want to change the learning rate used for updating the background model, it is possible to
    set a specific learning rate by passing a parameter to the `apply` method.

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/video/bg_sub.cpp apply
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java apply
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/video/background_subtraction/bg_sub.py apply
@end_toggle
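For example, in Python the learning rate is an optional argument of `apply`: 0 freezes the background model, 1 completely reinitializes it from the last frame, and a negative value (the default) lets the algorithm choose the rate automatically. A minimal sketch with an arbitrary illustrative value:

@code{.py}
# Compute the foreground mask while updating the background model with a
# fixed, slow learning rate instead of the automatically chosen one.
# The exact value 0.005 is only an example.
fgMask = backSub.apply(frame, learningRate=0.005)
@endcode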
-   The current frame number can be extracted from the @ref cv::VideoCapture object and stamped in
    the top left corner of the current frame. A white rectangle is used to highlight the black
    colored frame number.

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/video/bg_sub.cpp display_frame_number
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java display_frame_number
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/video/background_subtraction/bg_sub.py display_frame_number
@end_toggle
-   We are ready to show the current input frame and the results.

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/video/bg_sub.cpp show
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java show
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/video/background_subtraction/bg_sub.py show
@end_toggle
Results
-------

-   With the `vtest.avi` video, for the following frame:

![](images/Background_Subtraction_Tutorial_frame.jpg)

    The output of the program will look as follows for the MOG2 method (gray areas are detected shadows):

![](images/Background_Subtraction_Tutorial_result_MOG2.jpg)

    The output of the program will look as follows for the KNN method (gray areas are detected shadows):

![](images/Background_Subtraction_Tutorial_result_KNN.jpg)
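If the detected shadows are not wanted in the foreground mask, a simple post-processing step is to threshold it: both MOG2 and KNN mark shadow pixels with a dedicated gray value (127 by default, see `getShadowValue()`), while definite foreground pixels are set to 255. A minimal Python sketch:

@code{.py}
import cv2 as cv

# Keep only the definite foreground (255) and drop the shadow pixels,
# which are marked with a lower gray value (127 by default).
_, fgMaskNoShadows = cv.threshold(fgMask, 254, 255, cv.THRESH_BINARY)
@endcode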
References
----------

-   [Background Models Challenge (BMC) website](https://web.archive.org/web/20140418093037/http://bmc.univ-bpclermont.fr/)
-   A Benchmark Dataset for Foreground/Background Extraction @cite vacavant2013benchmark

Binary file not shown.


doc/tutorials/video/table_of_content_video.markdown
@@ -6,6 +6,8 @@ tracking and foreground extractions.

-   @subpage tutorial_background_subtraction

    *Languages:* C++, Java, Python

    *Compatibility:* \> OpenCV 2.4.6

    *Author:* Domenico Daniele Bloisi

samples/cpp/tutorial_code/video/bg_sub.cpp
@@ -4,180 +4,84 @@
 * @author Domenico D. Bloisi
 */
#include <iostream>
#include <sstream>

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/video.hpp>

using namespace cv;
using namespace std;

const char* params
    = "{ help h         |                   | Print usage }"
      "{ input          | ../data/vtest.avi | Path to a video or a sequence of image }"
      "{ algo           | MOG2              | Background subtraction method (KNN, MOG2) }";

int main(int argc, char* argv[])
{
    CommandLineParser parser(argc, argv, params);
    parser.about( "This program shows how to use background subtraction methods provided by "
                  " OpenCV. You can process both videos and images.\n" );
    if (parser.has("help"))
    {
        //print help information
        parser.printMessage();
    }

    //! [create]
    //create Background Subtractor objects
    Ptr<BackgroundSubtractor> pBackSub;
    if (parser.get<String>("algo") == "MOG2")
        pBackSub = createBackgroundSubtractorMOG2();
    else
        pBackSub = createBackgroundSubtractorKNN();
    //! [create]

    //! [capture]
    VideoCapture capture(parser.get<String>("input"));
    if (!capture.isOpened()){
        //error in opening the video input
        cerr << "Unable to open: " << parser.get<String>("input") << endl;
        return 0;
    }
    //! [capture]

    Mat frame, fgMask;
    while (true) {
        capture >> frame;
        if (frame.empty())
            break;

        //! [apply]
        //update the background model
        pBackSub->apply(frame, fgMask);
        //! [apply]

        //! [display_frame_number]
        //get the frame number and write it on the current frame
        rectangle(frame, cv::Point(10, 2), cv::Point(100,20),
                  cv::Scalar(255,255,255), -1);
        stringstream ss;
        ss << capture.get(CAP_PROP_POS_FRAMES);
        string frameNumberString = ss.str();
        putText(frame, frameNumberString.c_str(), cv::Point(15, 15),
                FONT_HERSHEY_SIMPLEX, 0.5 , cv::Scalar(0,0,0));
        //! [display_frame_number]

        //! [show]
        //show the current frame and the fg masks
        imshow("Frame", frame);
        imshow("FG Mask", fgMask);
        //! [show]

        //get the input from the keyboard
        int keyboard = waitKey(30);
        if (keyboard == 'q' || keyboard == 27)
            break;
    }

    return 0;
}

samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java
@@ -0,0 +1,79 @@
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.highgui.HighGui;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.BackgroundSubtractor;
import org.opencv.video.Video;
import org.opencv.videoio.VideoCapture;
import org.opencv.videoio.Videoio;
class BackgroundSubtraction {
    public void run(String[] args) {
        String input = args.length > 0 ? args[0] : "../data/vtest.avi";
        // compare strings with equals(), not ==
        boolean useMOG2 = args.length > 1 ? args[1].equals("MOG2") : true;

        //! [create]
        BackgroundSubtractor backSub;
        if (useMOG2) {
            backSub = Video.createBackgroundSubtractorMOG2();
        } else {
            backSub = Video.createBackgroundSubtractorKNN();
        }
        //! [create]

        //! [capture]
        VideoCapture capture = new VideoCapture(input);
        if (!capture.isOpened()) {
            System.err.println("Unable to open: " + input);
            System.exit(0);
        }
        //! [capture]

        Mat frame = new Mat(), fgMask = new Mat();
        while (true) {
            capture.read(frame);
            if (frame.empty()) {
                break;
            }

            //! [apply]
            // update the background model
            backSub.apply(frame, fgMask);
            //! [apply]

            //! [display_frame_number]
            // get the frame number and write it on the current frame
            Imgproc.rectangle(frame, new Point(10, 2), new Point(100, 20), new Scalar(255, 255, 255), -1);
            String frameNumberString = String.format("%d", (int)capture.get(Videoio.CAP_PROP_POS_FRAMES));
            Imgproc.putText(frame, frameNumberString, new Point(15, 15), Core.FONT_HERSHEY_SIMPLEX, 0.5,
                    new Scalar(0, 0, 0));
            //! [display_frame_number]

            //! [show]
            // show the current frame and the fg masks
            HighGui.imshow("Frame", frame);
            HighGui.imshow("FG Mask", fgMask);
            //! [show]

            // get the input from the keyboard
            int keyboard = HighGui.waitKey(30);
            if (keyboard == 'q' || keyboard == 27) {
                break;
            }
        }

        HighGui.waitKey();
        System.exit(0);
    }
}

public class BackgroundSubtractionDemo {
    public static void main(String[] args) {
        // Load the native OpenCV library
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        new BackgroundSubtraction().run(args);
    }
}

samples/python/tutorial_code/video/background_subtraction/bg_sub.py
@@ -0,0 +1,51 @@
from __future__ import print_function
import cv2 as cv
import argparse
parser = argparse.ArgumentParser(description='This program shows how to use background subtraction methods provided by \
OpenCV. You can process both videos and images.')
parser.add_argument('--input', type=str, help='Path to a video or a sequence of image.', default='../data/vtest.avi')
parser.add_argument('--algo', type=str, help='Background subtraction method (KNN, MOG2).', default='MOG2')
args = parser.parse_args()
## [create]
#create Background Subtractor objects
if args.algo == 'MOG2':
    backSub = cv.createBackgroundSubtractorMOG2()
else:
    backSub = cv.createBackgroundSubtractorKNN()
## [create]

## [capture]
capture = cv.VideoCapture(args.input)
if not capture.isOpened():
    print('Unable to open: ' + args.input)
    exit(0)
## [capture]

while True:
    ret, frame = capture.read()
    if frame is None:
        break

    ## [apply]
    #update the background model
    fgMask = backSub.apply(frame)
    ## [apply]

    ## [display_frame_number]
    #get the frame number and write it on the current frame
    cv.rectangle(frame, (10, 2), (100,20), (255,255,255), -1)
    cv.putText(frame, str(capture.get(cv.CAP_PROP_POS_FRAMES)), (15, 15),
               cv.FONT_HERSHEY_SIMPLEX, 0.5 , (0,0,0))
    ## [display_frame_number]

    ## [show]
    #show the current frame and the fg masks
    cv.imshow('Frame', frame)
    cv.imshow('FG Mask', fgMask)
    ## [show]

    keyboard = cv.waitKey(30)
    if keyboard == ord('q') or keyboard == 27:
        break