applications and can be used within functionally rich UI frameworks (such as Qt, WinForms or Cocoa) or without any UI at all, sometimes there is a need to try some functionality quickly and visualize the results. This is what the HighGUI module has been designed for.
It provides an easy interface to:
\begin{itemize}
\item create and manipulate windows that can display images and ``remember'' their content (no need to handle repaint events from the OS)
\item add trackbars to the windows, handle simple mouse events as well as keyboard commands
\item read and write images to/from disk or memory.
\item read video from a camera or file and write video to a file.
\end{itemize}
\ifCPy
\section{User Interface}
\ifC
\cvCPyFunc{ConvertImage}% XXX:TBD
Converts one image to another with an optional vertical flip.
\cvdefC{void cvConvertImage( const CvArr* src, CvArr* dst, int flags=0 );}
\begin{description}
\cvarg{src}{Source image.}
\cvarg{dst}{Destination image. Must be a single-channel or 3-channel 8-bit image.}
\cvarg{flags}{The operation flags:
\begin{description}
\cvarg{CV\_CVTIMG\_FLIP}{Flips the image vertically}
\cvarg{CV\_CVTIMG\_SWAP\_RB}{Swaps the red and blue channels. In OpenCV color images have \texttt{BGR} channel order, however on some systems the order needs to be reversed before displaying the image (\cross{ShowImage} does this automatically).}
\end{description}}
\end{description}
The function \texttt{cvConvertImage} converts one image to another and flips the result vertically if desired. The function is used by \cross{ShowImage}.
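Below is a minimal C sketch (the file names are only examples): the loaded image is flipped vertically and its R and B channels are swapped in a single call before being written back to disk.

\begin{lstlisting}
#include "cv.h"
#include "highgui.h"

int main(void)
{
    IplImage* src = cvLoadImage("input.jpg", CV_LOAD_IMAGE_COLOR);
    if( !src ) return -1;

    IplImage* dst = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 3);
    /* flip vertically and swap the R and B channels in one call */
    cvConvertImage(src, dst, CV_CVTIMG_FLIP | CV_CVTIMG_SWAP_RB);
    cvSaveImage("converted.jpg", dst);

    cvReleaseImage(&dst);
    cvReleaseImage(&src);
    return 0;
}
\end{lstlisting}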
\fi
\cvCPyFunc{CreateTrackbar}
Creates a trackbar and attaches it to the specified window.
\cvdefC{
int cvCreateTrackbar( \par const char* trackbarName, \par const char* windowName,
\par int* value, \par int count, \par CvTrackbarCallback onChange );
}
\begin{description}
\cvarg{trackbarName}{Name of the created trackbar.}
\cvarg{windowName}{Name of the window which will be used as a parent for created trackbar.}
\ifPy
\cvarg{value}{Initial value for the slider position, between 0 and \texttt{count}.}
\else
\cvarg{value}{Pointer to an integer variable, whose value will reflect the position of the slider. Upon creation, the slider position is defined by this variable.}
\fi
\cvarg{count}{Maximal position of the slider. Minimal position is always 0.}
\ifPy
\cvarg{onChange}{
OpenCV calls \texttt{onChange} every time the slider changes position.
OpenCV will call it as \texttt{func(x)} where \texttt{x} is the new position of the slider.}
\else
\cvarg{onChange}{
Pointer to the function to be called every time the slider changes position.
This function should be prototyped as \texttt{void Foo(int);}. Can be NULL if no callback is required.}
\fi
\end{description}
The function \texttt{cvCreateTrackbar} creates a trackbar (a.k.a. slider or range control) with the specified name and range, assigns a variable to be synchronized with the trackbar position, and specifies a callback function to be called when the trackbar position changes. The created trackbar is displayed on top of the given window.\\\\
\textbf{[Qt Backend Only]} Qt-specific details:
\begin{description}
\cvarg{windowName}{Name of the window which will be used as a parent for created trackbar. Can be NULL if the trackbar should be attached to the control panel.}
\end{description}
The created trackbar is displayed at the bottom of the given window if \emph{windowName} is correctly provided, or displayed on the control panel if \emph{windowName} is NULL.
By clicking on the label of each trackbar, it is possible to edit the trackbar's value manually for more accurate control.
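Below is a minimal C sketch of attaching a trackbar to a window (the window and trackbar names are arbitrary): the global variable stays synchronized with the slider, and the callback is invoked on every position change.

\begin{lstlisting}
#include <stdio.h>
#include "cv.h"
#include "highgui.h"

int g_pos = 128;                     /* synchronized with the slider */

void on_trackbar(int pos)
{
    /* called by HighGUI every time the slider moves */
    printf("new position: %d\n", pos);
}

int main(void)
{
    cvNamedWindow("settings", CV_WINDOW_AUTOSIZE);
    cvCreateTrackbar("level", "settings", &g_pos, 255, on_trackbar);
    cvWaitKey(0);
    cvDestroyWindow("settings");
    return 0;
}
\end{lstlisting}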
The function \texttt{cvGetWindowName} returns the name of the window given its native handle (HWND in the case of Win32 and GtkWidget in the case of GTK+).\\\\
\textbf{[Qt Backend Only]} Qt-specific details:
The function \texttt{cvGetWindowName} returns the name of the window given its native handle (QWidget).
\begin{description}
\cvarg{name}{Name of the window in the window caption that may be used as a window identifier.}
\cvarg{flags}{Flags of the window. Currently the only supported flag is \texttt{CV\_WINDOW\_AUTOSIZE}. If this is set, the window size is automatically adjusted to fit the displayed image (see \cross{ShowImage}), and the user cannot change the window size manually.}
\end{description}
The function \texttt{cvNamedWindow} creates a window which can be used as a placeholder for images and trackbars. Created windows are referred to by their names.
If a window with the same name already exists, the function does nothing.\\\\
\textbf{[Qt Backend Only]} Qt-specific details:
\begin{description}
\cvarg{flags}{Flags of the window. Currently the supported flags are:
\begin{description}
\cvarg{CV\_WINDOW\_NORMAL or CV\_WINDOW\_AUTOSIZE:}
{\texttt{CV\_WINDOW\_NORMAL} lets the user resize the window, whereas \texttt{CV\_WINDOW\_AUTOSIZE} automatically adjusts the window size to fit the displayed image (see \cross{ShowImage}), and the user cannot change the window size manually.}
\cvarg{CV\_WINDOW\_FREERATIO or CV\_WINDOW\_KEEPRATIO:}
{\texttt{CV\_WINDOW\_FREERATIO} stretches the image without preserving its aspect ratio, whereas \texttt{CV\_WINDOW\_KEEPRATIO} preserves the image's aspect ratio.}
\cvarg{CV\_GUI\_NORMAL or CV\_GUI\_EXPANDED:}
{\texttt{CV\_GUI\_NORMAL} is the old way to draw the window, without a statusbar or toolbar, whereas \texttt{CV\_GUI\_EXPANDED} is the new enhanced GUI.}
\end{description}
This parameter is optional. The default flags set for a new window are \texttt{CV\_WINDOW\_AUTOSIZE}, \texttt{CV\_WINDOW\_KEEPRATIO}, and \texttt{CV\_GUI\_EXPANDED}.
However, if you want to modify the flags, you can combine them using the OR operator, as in the sketch below.
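A minimal sketch (the window name is arbitrary):

\begin{lstlisting}
cvNamedWindow("myWindow", CV_WINDOW_NORMAL | CV_GUI_NORMAL);
\end{lstlisting}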
\cvarg{onMouse}{Pointer to the function to be called every time a mouse event occurs in the specified window. This function should be prototyped as
\texttt{void Foo(int event, int x, int y, int flags, void* param);}
where \texttt{event} is one of \texttt{CV\_EVENT\_*}, \texttt{x} and \texttt{y} are the coordinates of the mouse pointer in image coordinates (not window coordinates), \texttt{flags} is a combination of \texttt{CV\_EVENT\_FLAG\_*}, and \texttt{param} is a user-defined parameter passed to the \texttt{cvSetMouseCallback} function call.}
\else% }{
\cvarg{onMouse}{Callable to be called every time a mouse event occurs in the specified window. This callable should have signature
\texttt{ Foo(event, x, y, flags, param)-> None }
where \texttt{event} is one of \texttt{CV\_EVENT\_*}, \texttt{x} and \texttt{y} are the coordinates of the mouse pointer in image coordinates (not window coordinates), \texttt{flags} is a combination of \texttt{CV\_EVENT\_FLAG\_*}, and \texttt{param} is a user-defined parameter passed to the \texttt{cvSetMouseCallback} function call.}
\fi% }
\cvarg{param}{User-defined parameter to be passed to the callback function.}
\end{description}
The function \texttt{cvSetMouseCallback} sets the callback function for mouse events occurring within the specified window.
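Below is a minimal C sketch of registering a mouse callback (the file name is only an example): left clicks are reported in image coordinates.

\begin{lstlisting}
#include <stdio.h>
#include "cv.h"
#include "highgui.h"

void on_mouse(int event, int x, int y, int flags, void* param)
{
    if( event == CV_EVENT_LBUTTONDOWN )
        printf("left click at (%d, %d)\n", x, y);
}

int main(void)
{
    IplImage* img = cvLoadImage("input.jpg", CV_LOAD_IMAGE_COLOR);
    if( !img ) return -1;
    cvNamedWindow("view", CV_WINDOW_AUTOSIZE);
    cvSetMouseCallback("view", on_mouse, NULL);
    cvShowImage("view", img);
    cvWaitKey(0);
    cvReleaseImage(&img);
    return 0;
}
\end{lstlisting}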
The function \texttt{cvShowImage} displays the image in the specified window. If the window was created with the \texttt{CV\_WINDOW\_AUTOSIZE} flag then the image is shown with its original size, otherwise the image is scaled to fit in the window. The function may scale the image, depending on its depth:
\begin{itemize}
\item If the image is 8-bit unsigned, it is displayed as is.
\item If the image is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That is, the value range [0,255*256] is mapped to [0,255].
\item If the image is 32-bit floating-point, the pixel values are multiplied by 255. That is, the value range [0,1] is mapped to [0,255].
\end{itemize}
\cvCPyFunc{WaitKey}
Waits for a pressed key.
\cvdefC{int cvWaitKey( int delay=0 );}
\cvdefPy{WaitKey(delay=0)-> int}
\begin{description}
\cvarg{delay}{Delay in milliseconds.}
\end{description}
The function \texttt{cvWaitKey} waits for a key event indefinitely ($\texttt{delay} \leq 0$) or for \texttt{delay} milliseconds. It returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.
\textbf{Note:} This function is the only method in HighGUI that can fetch and handle events, so it needs to be called periodically for normal event processing, unless HighGUI is used within some environment that takes care of event processing.\\\\
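A typical event loop built around \texttt{cvWaitKey} (a C sketch; the file name is only an example): each call both polls the keyboard and pumps the HighGUI event queue.

\begin{lstlisting}
#include "cv.h"
#include "highgui.h"

int main(void)
{
    IplImage* img = cvLoadImage("input.jpg", CV_LOAD_IMAGE_COLOR);
    if( !img ) return -1;
    cvNamedWindow("demo", CV_WINDOW_AUTOSIZE);
    for(;;)
    {
        cvShowImage("demo", img);
        /* wait up to 20 ms; this also processes pending GUI events */
        if( (cvWaitKey(20) & 255) == 27 )   /* quit on ESC */
            break;
    }
    cvReleaseImage(&img);
    cvDestroyWindow("demo");
    return 0;
}
\end{lstlisting}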
\begin{description}
\cvarg{iscolor}{Specifies the color type of the loaded image:
\begin{description}
\cvarg{CV\_LOAD\_IMAGE\_COLOR}{the loaded image is forced to be a 3-channel color image}
\cvarg{CV\_LOAD\_IMAGE\_GRAYSCALE}{the loaded image is forced to be grayscale}
\cvarg{CV\_LOAD\_IMAGE\_UNCHANGED}{the loaded image will be loaded as is.}
\end{description}
}
\end{description}
The function \texttt{cvLoadImage} loads an image from the specified file and returns the pointer to the loaded image. Currently the following file formats are supported:
\begin{itemize}
\item Windows bitmaps - BMP, DIB
\item JPEG files - JPEG, JPG, JPE
\item Portable Network Graphics - PNG
\item Portable image format - PBM, PGM, PPM
\item Sun rasters - SR, RAS
\item TIFF files - TIFF, TIF
\end{itemize}
Note that in the current implementation the alpha channel, if any, is stripped from the output image; for example, a 4-channel RGBA image will be loaded as RGB.
\cvCPyFunc{LoadImageM}
Loads an image from a file as a CvMat.
\cvdefC{
CvMat* cvLoadImageM( \par const char* filename, \par int iscolor=CV\_LOAD\_IMAGE\_COLOR );}
The function \texttt{cvSaveImage} saves the image to the specified file. The image format is chosen based on the \texttt{filename} extension, see \cross{LoadImage}. Only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function. If the format, depth or channel order is different, use \texttt{cvCvtScale} and \texttt{cvCvtColor} to convert the image before saving, or use the universal \texttt{cvSave} to save the image to XML or YAML format.
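Below is a minimal C load/convert/save sketch (the file names are only examples): the image is converted to grayscale so that it meets the 8-bit single-channel requirement, and the output format is selected from the extension.

\begin{lstlisting}
#include "cv.h"
#include "highgui.h"

int main(void)
{
    IplImage* img = cvLoadImage("photo.png", CV_LOAD_IMAGE_COLOR);
    if( !img ) return -1;

    IplImage* gray = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    cvCvtColor(img, gray, CV_BGR2GRAY);

    /* the ".jpg" extension selects the JPEG writer */
    cvSaveImage("photo_gray.jpg", gray);

    cvReleaseImage(&gray);
    cvReleaseImage(&img);
    return 0;
}
\end{lstlisting}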
\cvclass{CvCapture}\label{CvCapture}
Video capturing structure.
\cvdefC{typedef struct CvCapture CvCapture;}
The structure \texttt{CvCapture} does not have a public interface and is used only as a parameter for video capturing functions.
\cvCPyFunc{CaptureFromCAM}% XXX:Called cvCreateCameraCapture in manual
Initializes capturing a video from a camera.
\cvdefC{CvCapture* cvCaptureFromCAM( int index );}
\cvdefPy{CaptureFromCAM(index) -> CvCapture}
\begin{description}
\cvarg{index}{Index of the camera to be used. If there is only one camera, or if it does not matter which camera is used, -1 may be passed.}
\end{description}
The function \texttt{cvCaptureFromCAM} allocates and initializes the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394).
To release the structure, use \cross{ReleaseCapture}.
\cvCPyFunc{CaptureFromFile}% XXX:Called cvCreateFileCapture in manual
The function \texttt{cvCaptureFromFile} allocates and initializes the CvCapture structure for reading the video stream from the specified file. Which codecs and file formats are supported depends on the back end library. On Windows HighGui uses Video for Windows (VfW), on Linux ffmpeg is used and on Mac OS X the back end is QuickTime. See VideoCodecs for some discussion on what to expect and how to prepare your video files.
When the allocated structure is no longer needed, it should be released by the \cross{ReleaseCapture} function.
\cvCPyFunc{GetCaptureProperty}
Gets video capturing properties.
\cvdefC{double cvGetCaptureProperty( CvCapture* capture, int property\_id );}
The function \texttt{cvGetCaptureProperty} retrieves the specified property of the camera or video file.
\cvCPyFunc{GrabFrame}
Grabs the frame from a camera or file.
\cvdefC{int cvGrabFrame( CvCapture* capture );}
\cvdefPy{GrabFrame(capture) -> int}
\begin{description}
\cvarg{capture}{Video capturing structure.}
\end{description}
The function \texttt{cvGrabFrame} grabs the frame from a camera or file. The grabbed frame is stored internally. The purpose of this function is to grab the frame \emph{quickly}, so that synchronization is possible when reading from several cameras simultaneously. The grabbed frames are not exposed because they may be stored in a compressed format (as defined by the camera/driver). To retrieve the grabbed frame, \cross{RetrieveFrame} should be used.
The function \texttt{cvQueryFrame} grabs a frame from a camera or video file, decompresses it and returns it. This function is just a combination of \cross{GrabFrame} and \cross{RetrieveFrame}, but in one call. The returned image should not be released or modified by the user. In the event of an error, the return value may be NULL.
The function \texttt{cvRetrieveFrame} returns the pointer to the image grabbed with the \cross{GrabFrame} function. The returned image should not be released or modified by the user. In the event of an error, the return value may be NULL.
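Below is a minimal C camera-preview sketch: \texttt{cvQueryFrame} combines grab and retrieve, and the returned frame is owned by the capture structure, so it is not released by the caller.

\begin{lstlisting}
#include "cv.h"
#include "highgui.h"

int main(void)
{
    CvCapture* capture = cvCaptureFromCAM(-1);   /* any available camera */
    if( !capture ) return -1;
    cvNamedWindow("camera", CV_WINDOW_AUTOSIZE);
    for(;;)
    {
        IplImage* frame = cvQueryFrame(capture); /* grab + retrieve */
        if( !frame ) break;                      /* owned by the capture */
        cvShowImage("camera", frame);
        if( (cvWaitKey(10) & 255) == 27 )        /* ESC to quit */
            break;
    }
    cvReleaseCapture(&capture);
    cvDestroyWindow("camera");
    return 0;
}
\end{lstlisting}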
\cvCPyFunc{SetCaptureProperty}
Sets video capturing properties.
\cvdefC{int cvSetCaptureProperty( \par CvCapture* capture, \par int property\_id, \par double value );}
The function \texttt{cvSetCaptureProperty} sets the specified property of video capturing. Currently the function supports only video files: \texttt{CV\_CAP\_PROP\_POS\_MSEC, CV\_CAP\_PROP\_POS\_FRAMES, CV\_CAP\_PROP\_POS\_AVI\_RATIO}.
\textbf{Note:} This function currently does nothing when using the latest CVS snapshot on Linux with FFMPEG (the function body is compiled out and 0 is returned).
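A C sketch of seeking in a video file before decoding (the file names are only examples, and seeking requires back end support, as noted above):

\begin{lstlisting}
#include "cv.h"
#include "highgui.h"

int main(void)
{
    CvCapture* capture = cvCaptureFromFile("video.avi");
    if( !capture ) return -1;

    /* jump to frame 100, then decode and save it */
    cvSetCaptureProperty(capture, CV_CAP_PROP_POS_FRAMES, 100);
    IplImage* frame = cvQueryFrame(capture);
    if( frame )
        cvSaveImage("frame100.png", frame);

    cvReleaseCapture(&capture);
    return 0;
}
\end{lstlisting}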
\cvCPyFunc{CreateVideoWriter}% XXX Different than manual
Creates the video file writer.
\cvdefC{
typedef struct CvVideoWriter CvVideoWriter;
CvVideoWriter* cvCreateVideoWriter( \par const char* filename, \par int fourcc, \par double fps, \par CvSize frame\_size, \par int is\_color=1 );
}
\begin{description}
\cvarg{fourcc}{4-character code of the codec used to compress the frames. For example,
\texttt{CV\_FOURCC('P','I','M','1')} is an MPEG-1 codec,
\texttt{CV\_FOURCC('M','J','P','G')} is a motion-JPEG codec, etc.
Under Win32 it is possible to pass -1 in order to choose the compression method and additional compression parameters from a dialog. Under Win32, if 0 is passed together with an AVI filename, a video writer that produces an uncompressed AVI file is created.}
\cvarg{fps}{Framerate of the created video stream.}
\cvarg{frame\_size}{Size of the video frames.}
\cvarg{is\_color}{If it is not zero, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only).}
\end{description}
The function \texttt{cvCreateVideoWriter} creates the video writer structure.
Which codecs and file formats are supported depends on the back end library. On Windows HighGui uses Video for Windows (VfW), on Linux ffmpeg is used and on Mac OS X the back end is QuickTime. See VideoCodecs for some discussion on what to expect.
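Below is a C sketch of recording a short clip from the default camera (the output file name and codec are only examples; codec availability depends on the back end):

\begin{lstlisting}
#include "cv.h"
#include "highgui.h"

int main(void)
{
    CvCapture* capture = cvCaptureFromCAM(0);
    if( !capture ) return -1;

    IplImage* frame = cvQueryFrame(capture);
    if( !frame ) { cvReleaseCapture(&capture); return -1; }

    /* motion-JPEG, 25 fps, same frame size as the camera */
    CvVideoWriter* writer = cvCreateVideoWriter("out.avi",
        CV_FOURCC('M','J','P','G'), 25, cvGetSize(frame), 1);

    int i;
    for( i = 0; i < 100 && frame; i++ )
    {
        cvWriteFrame(writer, frame);
        frame = cvQueryFrame(capture);
    }
    cvReleaseVideoWriter(&writer);
    cvReleaseCapture(&capture);
    return 0;
}
\end{lstlisting}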
\begin{description}
\cvarg{trackbarname}{Name of the created trackbar.}
\cvarg{winname}{Name of the window which will be used as a parent of the created trackbar.}
\cvarg{value}{The optional pointer to an integer variable, whose value will reflect the position of the slider. Upon creation, the slider position is defined by this variable.}
\cvarg{count}{The maximal position of the slider. The minimal position is always 0.}
\cvarg{onChange}{Pointer to the function to be called every time the slider changes position. This function should be prototyped as \texttt{void Foo(int,void*);}, where the first parameter is the trackbar position and the second parameter is the user data (see the next parameter). If the callback is a NULL pointer, no callbacks are called; only \texttt{value} is updated.}
\cvarg{userdata}{The user data that is passed as-is to the callback; it can be used to handle trackbar events without using global variables.}
\end{description}
The function \texttt{createTrackbar} creates a trackbar (a.k.a. slider or range control) with the specified name and range, assigns a variable \texttt{value} to be synchronized with the trackbar position, and specifies a callback function \texttt{onChange} to be called when the trackbar position changes. The created trackbar is displayed on top of the given window.\\\\
\textbf{[Qt Backend Only]} Qt-specific details:
\begin{description}
\cvarg{winname}{Name of the window which will be used as a parent for created trackbar. Can be NULL if the trackbar should be attached to the control panel.}
\end{description}
The created trackbar is displayed at the bottom of the given window if \emph{winname} is correctly provided, or displayed on the control panel if \emph{winname} is NULL.
By clicking on the label of each trackbar, it is possible to edit the trackbar's value manually for more accurate control.
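Below is a minimal C++ sketch (the file name is only an example): the user-data pointer carries the source image into the callback, so no extra globals are needed.

\begin{lstlisting}
#include "cv.h"
#include "highgui.h"
using namespace cv;

// pos is the trackbar position, userdata is the pointer passed below
static void onThreshold(int pos, void* userdata)
{
    Mat& gray = *(Mat*)userdata;
    Mat bw;
    threshold(gray, bw, pos, 255, THRESH_BINARY);
    imshow("binarized", bw);
}

int main()
{
    Mat gray = imread("input.jpg", 0);      // load as grayscale
    if( gray.empty() ) return -1;
    namedWindow("binarized", 1);
    int thresh = 128;
    createTrackbar("threshold", "binarized", &thresh, 255,
                   onThreshold, &gray);
    onThreshold(thresh, &gray);             // draw the initial state
    waitKey(0);
    return 0;
}
\end{lstlisting}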
The function \texttt{imshow} displays the image in the specified window. If the window was created with the \texttt{CV\_WINDOW\_AUTOSIZE} flag then the image is shown with its original size, otherwise the image is scaled to fit in the window. The function may scale the image, depending on its depth:
\begin{itemize}
\item If the image is 8-bit unsigned, it is displayed as is.
\item If the image is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That is, the value range [0,255*256] is mapped to [0,255].
\item If the image is 32-bit floating-point, the pixel values are multiplied by 255. That is, the value range [0,1] is mapped to [0,255].
\end{itemize}
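A small C++ sketch illustrating the scaling rule for floating-point images (the file name is only an example): the image is converted to 32-bit float in the [0,1] range, and \texttt{imshow} maps it back to [0,255] for display.

\begin{lstlisting}
#include "cv.h"
#include "highgui.h"
using namespace cv;

int main()
{
    Mat img = imread("input.jpg", 1);
    if( img.empty() ) return -1;

    Mat imgf;
    img.convertTo(imgf, CV_32F, 1.0/255);   // now in the [0,1] range

    namedWindow("float image", CV_WINDOW_AUTOSIZE);
    imshow("float image", imgf);            // displayed as imgf*255
    waitKey(0);
    return 0;
}
\end{lstlisting}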
\cvCppFunc{namedWindow}
Creates a window.
\cvdefCpp{void namedWindow( const string\& winname, \par int flags );}
\begin{description}
\cvarg{winname}{Name of the window in the window caption that may be used as a window identifier.}
\cvarg{flags}{Flags of the window. Currently the only supported flag is \texttt{CV\_WINDOW\_AUTOSIZE}. If this is set, the window size is automatically adjusted to fit the displayed image (see \cross{imshow}), and the user cannot change the window size manually.}
\end{description}
The function \texttt{namedWindow} creates a window which can be used as a placeholder for images and trackbars. Created windows are referred to by their names.
If a window with the same name already exists, the function does nothing.\\\\
\textbf{[Qt Backend Only]} Qt-specific details:
\begin{description}
\cvarg{flags}{Flags of the window. Currently the supported flags are:
\begin{description}
\cvarg{CV\_WINDOW\_NORMAL or CV\_WINDOW\_AUTOSIZE:}
{\texttt{CV\_WINDOW\_NORMAL} lets the user resize the window, whereas \texttt{CV\_WINDOW\_AUTOSIZE} automatically adjusts the window size to fit the displayed image (see \cross{imshow}), and the user cannot change the window size manually.}
\cvarg{CV\_WINDOW\_FREERATIO or CV\_WINDOW\_KEEPRATIO:}
{\texttt{CV\_WINDOW\_FREERATIO} stretches the image without preserving its aspect ratio, whereas \texttt{CV\_WINDOW\_KEEPRATIO} preserves the image's aspect ratio.}
\cvarg{CV\_GUI\_NORMAL or CV\_GUI\_EXPANDED:}
{\texttt{CV\_GUI\_NORMAL} is the old way to draw the window, without a statusbar or toolbar, whereas \texttt{CV\_GUI\_EXPANDED} is the new enhanced GUI.}
\end{description}
This parameter is optional. The default flags set for a new window are \texttt{CV\_WINDOW\_AUTOSIZE}, \texttt{CV\_WINDOW\_KEEPRATIO}, and \texttt{CV\_GUI\_EXPANDED}.
However, if you want to modify the flags, you can combine them using the OR operator, as in the sketch below.
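A minimal sketch (the window name is arbitrary):

\begin{lstlisting}
namedWindow("myWindow", CV_WINDOW_NORMAL | CV_GUI_NORMAL);
\end{lstlisting}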
\cvarg{delay}{Delay in milliseconds. 0 is the special value that means ``forever''.}
\end{description}
The function \texttt{waitKey} waits for a key event indefinitely (when $\texttt{delay}\leq 0$) or for \texttt{delay} milliseconds, when it is positive. It returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.
\textbf{Note:} This function is the only method in HighGUI that can fetch and handle events, so it needs to be called periodically for normal event processing, unless HighGUI is used within some environment that takes care of event processing.
\textbf{Note 2:} The function only works if there is at least one HighGUI window created and the window is active. If there are several HighGUI windows, any of them can be active.
\section{Reading and Writing Images and Video}
\cvCppFunc{imdecode}
Reads an image from a buffer in memory.
\cvdefCpp{Mat imdecode( const Mat\& buf, \par int flags );}
\begin{description}
\cvarg{buf}{The input array or vector of bytes}
\cvarg{flags}{The same flags as in \cross{imread}}
\end{description}
The function reads an image from the specified buffer in memory.
If the buffer is too short or contains invalid data, an empty matrix is returned.
See \cross{imread} for the list of supported formats and the flags description.
\cvCppFunc{imencode}
Encodes an image into a memory buffer.
\cvdefCpp{bool imencode( const string\& ext,\par
const Mat\& img,\par
vector<uchar>\& buf,\par
const vector<int>\& params=vector<int>());}
\begin{description}
\cvarg{ext}{The file extension that defines the output format}
\cvarg{img}{The image to be written}
\cvarg{buf}{The output buffer; resized to fit the compressed image}
\cvarg{params}{The format-specific parameters; see \cross{imwrite}}
\end{description}
The function compresses the image and stores it in the memory buffer, which is resized to fit the result.
See \cross{imwrite} for the list of supported formats and the flags description.
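Below is a C++ sketch of an encode/decode round trip in memory (the input file name is only an example): the image is compressed to an in-memory JPEG at quality 90 and then decoded back with \cross{imdecode}.

\begin{lstlisting}
#include "cv.h"
#include "highgui.h"
#include <vector>
using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("input.jpg", 1);
    if( img.empty() ) return -1;

    // compress to an in-memory JPEG buffer at quality 90
    vector<uchar> buf;
    vector<int> params;
    params.push_back(CV_IMWRITE_JPEG_QUALITY);
    params.push_back(90);
    imencode(".jpg", img, buf, params);

    // decode the byte buffer back into a Mat
    Mat decoded = imdecode(Mat(buf), 1);
    return decoded.empty() ? -1 : 0;
}
\end{lstlisting}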
\cvCppFunc{imread}
Loads an image from a file.
\cvdefCpp{Mat imread( const string\& filename, \par int flags=1 );}
\begin{description}
\cvarg{filename}{Name of file to be loaded.}
\cvarg{flags}{Specifies the color type of the loaded image:}
\begin{description}
\cvarg{>0}{the loaded image is forced to be a 3-channel color image}
\cvarg{=0}{the loaded image is forced to be grayscale}
\cvarg{<0}{the image will be loaded as-is (note that in the current implementation the alpha channel, if any, is stripped from the output image, e.g. a 4-channel RGBA image will be loaded as RGB if $\texttt{flags}\ge 0$).}
\end{description}
\end{description}
The function \texttt{imread} loads an image from the specified file and returns it. If the image cannot be read (because of a missing file, improper permissions, or an unsupported or invalid format), the function returns an empty matrix (\texttt{Mat::data==NULL}). Currently, the following file formats are supported:
\begin{itemize}
\item Windows bitmaps - \texttt{*.bmp, *.dib} (always supported)
\item JPEG files - \texttt{*.jpeg, *.jpg, *.jpe} (see \textbf{Note2})
\item JPEG 2000 files - \texttt{*.jp2} (see \textbf{Note2})
\item Portable Network Graphics - \texttt{*.png} (see \textbf{Note2})
\item Portable image format - \texttt{*.pbm, *.pgm, *.ppm} (always supported)
\item Sun rasters - \texttt{*.sr, *.ras} (always supported)
\item TIFF files - \texttt{*.tiff, *.tif} (see \textbf{Note2})
\end{itemize}
\textbf{Note1}: The function determines the type of the image by its content, not by the file extension.
\textbf{Note2}: On Windows and Mac OS X the image codecs shipped with OpenCV (libjpeg, libpng, libtiff and libjasper) are used by default, so OpenCV can always read JPEGs, PNGs and TIFFs. On Mac OS X there is also the option to use the native Mac OS X image readers, but beware that currently these native image loaders give images with somewhat different pixel values, because of the color management embedded into Mac OS X.
On Linux, BSD flavors and other Unix-like open-source operating systems OpenCV looks for the image codecs supplied with the OS. Please install the relevant packages (do not forget the development files, e.g. ``libjpeg-dev'' in Debian and Ubuntu) in order to get the codec support, or turn on the \texttt{OPENCV\_BUILD\_3RDPARTY\_LIBS} flag in CMake.
\begin{description}
\cvarg{params}{The format-specific save parameters, encoded as pairs \texttt{paramId\_1, paramValue\_1, paramId\_2, paramValue\_2, ...}. The following parameters are currently supported:
\begin{itemize}
\item In the case of JPEG it can be the quality (\texttt{CV\_IMWRITE\_JPEG\_QUALITY}), from 0 to 100 (the higher the value, the better the quality); 95 by default.
\item In the case of PNG it can be the compression level (\texttt{CV\_IMWRITE\_PNG\_COMPRESSION}), from 0 to 9 (a higher value means a smaller size and longer compression time); 3 by default.
\item In the case of PPM, PGM or PBM it can be a binary format flag (\texttt{CV\_IMWRITE\_PXM\_BINARY}), 0 or 1; 1 by default.
\end{itemize}
}
\end{description}
The function \texttt{imwrite} saves the image to the specified file. The image format is chosen based on the \texttt{filename} extension, see \cross{imread} for the list of extensions. Only 8-bit (or 16-bit in the case of PNG, JPEG 2000 and TIFF) single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function. If the format, depth or channel order is different, use \cross{Mat::convertTo} and \cross{cvtColor} to convert the image before saving, or use the universal XML I/O functions to save the image to XML or YAML format.
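Below is a minimal C++ sketch of writing a PNG with explicit compression parameters (the file names are only examples):

\begin{lstlisting}
#include "cv.h"
#include "highgui.h"
#include <vector>
using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("input.jpg", 1);
    if( img.empty() ) return -1;

    // request maximum PNG compression (level 9)
    vector<int> params;
    params.push_back(CV_IMWRITE_PNG_COMPRESSION);
    params.push_back(9);
    return imwrite("output.png", img, params) ? 0 : -1;
}
\end{lstlisting}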
\cvclass{VideoCapture}
Class for video capturing from video files or cameras.
\begin{lstlisting}
class VideoCapture
{
public:
// the default constructor
VideoCapture();
// the constructor that opens video file
VideoCapture(const string& filename);
// the constructor that starts streaming from the camera
VideoCapture(int device);
// the destructor
virtual ~VideoCapture();
// opens the specified video file
virtual bool open(const string& filename);
// starts streaming from the specified camera by its id
virtual bool open(int device);
// returns true if the file was opened successfully or if the camera
// has been initialized successfully
virtual bool isOpened() const;
// closes the camera stream or the video file
// (automatically called by the destructor)
virtual void release();
// grab the next frame or a set of frames from a multi-head camera;
// returns false if there are no more frames
virtual bool grab();
// reads the frame from the specified video stream
// (non-zero channel is only valid for multi-head camera live streams)
virtual bool retrieve(Mat& image, int channel=0);
// equivalent to grab() + retrieve(image, 0);
virtual VideoCapture& operator >> (Mat& image);
// sets the specified property propId to the specified value
virtual bool set(int propId, double value);
// retrieves value of the specified property
virtual double get(int propId);
protected:
...
};
\end{lstlisting}
The class provides the C++ video capturing API. Here is how the class can be used:
\begin{lstlisting}
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
namedWindow("edges",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_BGR2GRAY);
GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
Canny(edges, edges, 0, 30, 3);
imshow("edges", edges);
if(waitKey(30) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor