:param objectPoints: Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. ``vector<Point3f>`` can be also passed here.
solvePnPRansac
------------------
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
:param objectPoints: Vector of vectors of the calibration pattern points in the calibration pattern coordinate space. In the old interface all the per-view vectors are concatenated. See :ocv:func:`calibrateCamera` for details.
Projects 3D points to an image plane.
:param objectPoints: Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel (or ``vector<Point3f>`` ), where N is the number of points in the view.
reprojectImageTo3D
----------------------
Reprojects a disparity image to 3D space.
.. ocv:function:: void reprojectImageTo3D( InputArray disparity, OutputArray _3dImage, InputArray Q, bool handleMissingValues=false, int ddepth=-1 )
.. ocv:function:: StereoSGBM::StereoSGBM( int minDisparity, int numDisparities, int SADWindowSize, int P1=0, int P2=0, int disp12MaxDiff=0, int preFilterCap=0, int uniquenessRatio=0, int speckleWindowSize=0, int speckleRange=0, bool fullDP=false)
Approximates an elliptic arc with a polyline.
:param center: Center of the arc.
:param axes: Half-sizes of the arc. See the :ocv:func:`ellipse` for details.
:param angle: Rotation angle of the ellipse in degrees. See the :ocv:func:`ellipse` for details.
:param startAngle: Starting angle of the elliptic arc in degrees.
:param endAngle: Ending angle of the elliptic arc in degrees.
Calculates the width and height of a text string.
:param text: Input text string.
:param fontFace: Font to use. See the :ocv:func:`putText` for details.
:param fontScale: Font scale. See the :ocv:func:`putText` for details.
:param thickness: Thickness of lines used to render the text. See :ocv:func:`putText` for details.
:param baseLine: Output parameter - y-coordinate of the baseline relative to the bottom-most text point.
Initializes font structure (OpenCV 1.x API).
.. ocv:cfunction:: void cvInitFont( CvFont* font, int fontFace, double hscale, double vscale, double shear=0, int thickness=1, int lineType=8 )
:param font: Pointer to the font structure initialized by the function.
:param fontFace: Font name identifier. Only a subset of Hershey fonts http://sources.isc.org/utils/misc/hershey-font.txt are supported now:
* **CV_FONT_HERSHEY_SIMPLEX** normal size sans-serif font
* **CV_FONT_HERSHEY_PLAIN** small size sans-serif font
* **CV_FONT_HERSHEY_DUPLEX** normal size sans-serif font (more complex than ``CV_FONT_HERSHEY_SIMPLEX`` )
* **CV_FONT_HERSHEY_COMPLEX** normal size serif font
* **CV_FONT_HERSHEY_TRIPLEX** normal size serif font (more complex than ``CV_FONT_HERSHEY_COMPLEX`` )
* **CV_FONT_HERSHEY_COMPLEX_SMALL** smaller version of ``CV_FONT_HERSHEY_COMPLEX``
* **CV_FONT_HERSHEY_SCRIPT_SIMPLEX** hand-writing style font
* **CV_FONT_HERSHEY_SCRIPT_COMPLEX** more complex variant of ``CV_FONT_HERSHEY_SCRIPT_SIMPLEX``
The parameter can be composited from one of the values above and an optional ``CV_FONT_ITALIC`` flag, which indicates an italic or oblique font.
:param hscale: Horizontal scale. If equal to ``1.0f``, the characters have the original width depending on the font type. If equal to ``0.5f``, the characters are of half the original width.
:param vscale: Vertical scale. If equal to ``1.0f``, the characters have the original height depending on the font type. If equal to ``0.5f``, the characters are of half the original height.
:param shear: Approximate tangent of the character slope relative to the vertical line. A zero value means a non-italic font, ``1.0f`` means about a 45 degree slope, etc.
:param thickness: Thickness of the text strokes.
:param lineType: Type of the strokes, see the :ocv:func:`line` description.
The function initializes the font structure that can be passed to text rendering functions.
.. seealso:: :ocv:cfunc:`PutText`
.. _Line:
line
--------
Draws a simple, thick, or filled up-right rectangle.
:param pt1: Vertex of the rectangle.
:param pt2: Vertex of the rectangle opposite to ``pt1`` .
:param r: Alternative specification of the drawn rectangle.
:param color: Rectangle color or brightness (grayscale image).
Draws several polygonal curves.
.. ocv:cfunction:: void cvPolyLine( CvArr* img, CvPoint** pts, int* npts, int contours, int isClosed, CvScalar color, int thickness=1, int lineType=8, int shift=0 )
:param dst: Destination array that has the same size and type as ``src1`` (or ``src2``).
The function ``absdiff`` computes:
*
Computes the per-element sum of two arrays or an array and a scalar.
:param src1: First source array or a scalar.
:param src2: Second source array or a scalar.
:param dst: Destination array that has the same size and number of channels as the input array(s). The depth is defined by ``dtype`` or ``src1``/``src2``.
:param mask: Optional operation mask, 8-bit single channel array, that specifies elements of the destination array to be changed.
:param dtype: Optional depth of the output array. See the discussion below.
The function ``add`` computes:
The input arrays and the destination array can all have the same or different depths.
.. note:: Saturation is not applied when the output array has the depth ``CV_32S``. You may even get a result with an incorrect sign in case of overflow.
.. seealso::
:ocv:func:`subtract`,
:ocv:func:`addWeighted`,
:ocv:func:`scaleAdd`,
Computes the weighted sum of two arrays.
:param alpha: Weight for the first array elements.
:param src2: Second source array of the same size and channel number as ``src1`` .
:param beta: Weight for the second array elements.
:param dst: Destination array that has the same size and number of channels as the input arrays.
:param gamma: Scalar added to each sum.
:param dtype: Optional depth of the destination array. When both input arrays have the same depth, ``dtype`` can be set to ``-1``, which will be equivalent to ``src1.depth()``.
The function ``addWeighted`` calculates the weighted sum of two arrays as follows:
Calculates the per-element bit-wise conjunction of two arrays or an array and a scalar.
:param src2: Second source array or a scalar.
:param dst: Destination array that has the same size and type as the input array(s).
:param mask: Optional operation mask, 8-bit single channel array, that specifies elements of the destination array to be changed.
The function computes the per-element bit-wise logical conjunction for:
Inverts every bit of an array.
:param src: Source array.
:param dst: Destination array that has the same size and type as the input array.
:param mask: Optional operation mask, 8-bit single channel array, that specifies elements of the destination array to be changed.
The function computes per-element bit-wise inversion of the source array:
The function computes the per-element bit-wise logical disjunction for:
In case of floating-point arrays, their machine-specific bit representations (usually IEEE754-compliant) are used for the operation. In case of multi-channel arrays, each channel is processed independently. In the second and third cases above, the scalar is first converted to the array type.
bitwise_xor
-----------
The function computes the per-element bit-wise logical "exclusive-or" operation for:
In case of floating-point arrays, their machine-specific bit representations (usually IEEE754-compliant) are used for the operation. In case of multi-channel arrays, each channel is processed independently. In the 2nd and 3rd cases above, the scalar is first converted to the array type.
calcCovarMatrix
---------------
Calculates the covariance matrix of a set of vectors.
The covariance matrix will be ``nsamples x nsamples``. Such an unusual covariance matrix is used for fast PCA of a set of very large vectors (see, for example, the EigenFaces technique for face recognition). Eigenvalues of this "scrambled" matrix match the eigenvalues of the true covariance matrix. The "true" eigenvectors can be easily calculated from the eigenvectors of the "scrambled" covariance matrix.
* **CV_COVAR_NORMAL** The output covariance matrix is calculated as:
``covar`` will be a square matrix of the same size as the total number of elements in each input vector. One and only one of ``CV_COVAR_SCRAMBLED`` and ``CV_COVAR_NORMAL`` must be specified.
* **CV_COVAR_USE_AVG** If the flag is specified, the function does not calculate ``mean`` from the input vectors but, instead, uses the passed ``mean`` vector. This is useful if ``mean`` has been pre-computed or known in advance, or if the covariance matrix is calculated by parts. In this case, ``mean`` is not a mean vector of the input sub-set of vectors but rather the mean vector of the whole set.
Calculates the magnitude and angle of 2D vectors.
:param x: Array of x-coordinates. This must be a single-precision or double-precision floating-point array.
:param y: Array of y-coordinates that must have the same size and same type as ``x`` .
:param magnitude: Destination array of magnitudes of the same size and type as ``x`` .
:param angle: Destination array of angles that has the same size and type as ``x`` . The angles are measured in radians (from 0 to 2*Pi) or in degrees (0 to 360 degrees).
:param angleInDegrees: Flag indicating whether the angles are measured in radians, which is the default mode, or in degrees.
Checks every element of an input array for invalid values.
:param src1: First source array or a scalar (in the case of ``cvCmp``, ``cv.Cmp``, ``cvCmpS``, ``cv.CmpS`` it is always an array). When it is an array, it must have a single channel.
:param src2: Second source array or a scalar (in the case of ``cvCmp`` and ``cv.Cmp`` it is always an array; in the case of ``cvCmpS``, ``cv.CmpS`` it is always a scalar). When it is an array, it must have a single channel.
:param dst: Destination array that has the same size as the input array(s) and type ``CV_8UC1`` .
:param cmpop: Flag specifying the relation between the elements to be checked.
* **CMP_EQ** ``src1`` equal to ``src2``.
* **CMP_GT** ``src1`` greater than ``src2``.
* **CMP_GE** ``src1`` greater than or equal to ``src2``.
* **CMP_LT** ``src1`` less than ``src2``.
* **CMP_LE** ``src1`` less than or equal to ``src2``.
When the comparison result is true, the corresponding element of destination array is set to 255.
The comparison operations can be replaced with the equivalent matrix expressions: ::
Mat dst1 = src1 >= src2;
The function ``completeSymm`` copies the lower half of a square matrix to its upper half, or vice versa. The matrix diagonal remains unchanged:
*
:math:`\texttt{mtx}_{ij}=\texttt{mtx}_{ji}` for
:math:`i > j` if ``lowerToUpper=false``
*
:math:`\texttt{mtx}_{ij}=\texttt{mtx}_{ji}` for
:math:`i < j` if ``lowerToUpper=true``
.. seealso::
:ocv:func:`flip`,
Converts ``CvMat``, ``IplImage``, or ``CvMatND`` to ``Mat``.
.. ocv:function:: Mat cvarrToMat(const CvArr* src, bool copyData=false, bool allowND=true, int coiMode=0)
:param src: Source ``CvMat``, ``IplImage`` , or ``CvMatND`` .
:param copyData: When it is false (default value), no data is copied and only the new header is created. In this case, the original array should not be deallocated while the new matrix header is used. If the parameter is true, all the data is copied and you may deallocate the original array right after the conversion.
:param allowND: When it is true (default value), ``CvMatND`` is converted to 2-dimensional ``Mat``, if it is possible (see the discussion below). If it is not possible, or when the parameter is false, the function will report an error.
The last parameter, ``coiMode``, specifies how to deal with an image with COI set.
:ocv:cfunc:`cvGetMat`,
:ocv:func:`extractImageCOI`,
:ocv:func:`insertImageCOI`,
:ocv:func:`mixChannels`
dct
-------
Performs a forward or inverse discrete Cosine transform of 1D or 2D array.
:param src: Source floating-point array.
:param dst: Destination array of the same size and type as ``src`` .
:param flags: Transformation flags as a combination of the following values:
* **DCT_INVERSE** performs an inverse 1D or 2D transform instead of the default forward transform.
The function ``dct`` performs a forward or inverse discrete Cosine transform (DCT) of a 1D or 2D floating-point array:
:math:`\alpha_0=1`, :math:`\alpha_j=2` for *j > 0*.
*
The function chooses the mode of operation by looking at the flags and size of the input array:
If none of the above is true, the function performs a 2D transform.
.. note::
Currently ``dct`` supports even-size arrays (2, 4, 6 ...). For data analysis and approximation, you can pad the array when necessary.
Also, the function performance depends very much, and not monotonically, on the array size (see
Performs a forward or inverse Discrete Fourier transform of a 1D or 2D floating-point array.
:param src: Source array that could be real or complex.
:param dst: Destination array whose size and type depends on the ``flags`` .
:param flags: Transformation flags representing a combination of the following values:
* **DFT_INVERSE** performs an inverse 1D or 2D transform instead of the default forward transform.
* **DFT_SCALE** scales the result: divide it by the number of array elements. Normally, it is combined with ``DFT_INVERSE`` .
* **DFT_ROWS** performs a forward or inverse transform of every individual row of the input matrix. This flag enables you to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself) to perform 3D and higher-dimensional transforms and so forth.
* **DFT_COMPLEX_OUTPUT** performs a forward transformation of 1D or 2D real array. The result, though being a complex array, has complex-conjugate symmetry (*CCS*, see the function description below for details). Such an array can be packed into a real array of the same size as input, which is the fastest option and which is what the function does by default. However, you may wish to get a full complex array (for simpler spectrum analysis, and so on). Pass the flag to enable the function to produce a full-size complex output array.
The function performs one of the following:
where
:math:`F^{(N)}_{jk}=\exp(-2\pi i j k/N)` and
:math:`i=\sqrt{-1}`
*
The inverse Fourier transform of a 1D vector of ``N`` elements:
:param src2: Second source array of the same size and type as ``src1`` .
:param scale: Scalar factor.
:param dst: Destination array of the same size and type as ``src2`` .
:param dtype: Optional depth of the destination array. If it is ``-1``, ``dst`` will have depth ``src2.depth()``. In case of an array-by-array division, you can only pass ``-1`` when ``src1.depth()==src2.depth()``.
The functions ``divide`` divide one array by another:
.. math::
Returns the determinant of a square floating-point matrix.
.. ocv:pyfunction:: cv2.determinant(mtx) -> retval
.. ocv:cfunction:: double cvDet(const CvArr* mtx)
.. ocv:pyoldfunction:: cv.Det(mat) -> float
:param mtx: Input matrix that must have ``CV_32FC1`` or ``CV_64FC1`` type and square size.
For symmetric positively-determined matrices, it is also possible to use :ocv:func:`eigen` decomposition to calculate the determinant.
eigen
-----
Computes eigenvalues and eigenvectors of a symmetric matrix.
.. ocv:function:: bool eigen(InputArray src, OutputArray eigenvalues, int lowindex=-1, int highindex=-1)
:param src: Input matrix that must have ``CV_32FC1`` or ``CV_64FC1`` type, square size and be symmetrical (``src`` :sup:`T` == ``src``).
:param eigenvalues: Output vector of eigenvalues of the same type as ``src`` . The eigenvalues are stored in the descending order.
:param eigenvectors: Output matrix of eigenvectors. It has the same size and type as ``src`` . The eigenvectors are stored as subsequent matrix rows, in the same order as the corresponding eigenvalues.
Extracts the selected image channel.
.. ocv:function:: void extractImageCOI(const CvArr* src, OutputArray dst, int coi=-1)
:param src: Source array. It should be a pointer to ``CvMat`` or ``IplImage`` .
:param dst: Destination array with a single channel and the same size and depth as ``src`` .
:param coi: If the parameter is ``>=0`` , it specifies the channel to extract. If it is ``<0`` and ``src`` is a pointer to ``IplImage`` with a valid COI set, the selected COI is extracted.
The function ``extractImageCOI`` is used to extract an image COI from an old-style array and put the result to the new-style C++ matrix. As usual, the destination matrix is reallocated using ``Mat::create`` if needed.
Flips a 2D array around vertical, horizontal, or both axes.
:param src: Source array.
:param dst: Destination array of the same size and type as ``src`` .
:param flipCode: Flag to specify how to flip the array. 0 means flipping around the x-axis. Positive value (for example, 1) means flipping around y-axis. Negative value (for example, -1) means flipping around both axes. See the discussion below for the formulas.
The function ``flip`` flips the array in one of three different ways (row and column indices are 0-based):
The function performs generalized matrix multiplication similar to the ``gemm`` functions in BLAS level 3. For example, ``gemm(src1, src2, alpha, src3, beta, dst, GEMM_1_T + GEMM_3_T)`` corresponds to
.. math::
Computes the inverse Discrete Cosine Transform of a 1D or 2D array.
:param dst: Destination array of the same size and type as ``src`` .
:param flags: Operation flags.
``idct(src, dst, flags)`` is equivalent to ``dct(src, dst, flags | DCT_INVERSE)``.
.. seealso::
Computes the inverse Discrete Fourier Transform of a 1D or 2D array.
:param src: Source floating-point real or complex array.
:param dst: Destination array whose size and type depend on the ``flags`` .
:param flags: Operation flags. See :ocv:func:`dft` .
:param nonzeroRows: Number of ``dst`` rows to compute. The rest of the rows have undefined content. See the convolution sample in :ocv:func:`dft` description.
``idft(src, dst, flags)`` is equivalent to ``dft(src, dst, flags | DFT_INVERSE)`` .
See :ocv:func:`dft` for details.
Checks if array elements lie between the elements of two other arrays.
:param src: First source array.
:param lowerb: Inclusive lower boundary array or a scalar.
:param upperb: Inclusive upper boundary array or a scalar.
:param dst: Destination array of the same size as ``src`` and ``CV_8U`` type.
The function checks the range as follows:
Finds the inverse or pseudo-inverse of a matrix.
.. ocv:cfunction:: double cvInvert(const CvArr* src, CvArr* dst, int flags=CV_LU)
:param src: Source floating-point ``M x N`` matrix.
:param dst: Destination matrix of ``N x M`` size and the same type as ``src`` .
:param flags: Inversion method:
* **DECOMP_LU** Gaussian elimination with the optimal pivot element chosen.
Calculates the natural logarithm of every array element.
:param src: Source array.
:param dst: Destination array of the same size and type as ``src`` .
The function ``log`` calculates the natural logarithm of the absolute value of every element of the input array:
.. math::
Performs a look-up table transform of an array.
:param lut: Look-up table of 256 elements. In case of multi-channel source array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the source array.
:param dst: Destination array of the same size and the same number of channels as ``src`` , and the same depth as ``lut`` .
The function ``LUT`` fills the destination array with values from the look-up table. Indices of the entries are taken from the source array. That is, the function processes each element of ``src`` as follows:
.. math::
Calculates the magnitude of 2D vectors.
:param x: Floating-point array of x-coordinates of the vectors.
:param y: Floating-point array of y-coordinates of the vectors. It must have the same size as ``x`` .
:param magnitude: Destination array of the same size and type as ``x`` .
The function ``magnitude`` calculates the magnitude of 2D vectors formed from the corresponding elements of ``x`` and ``y`` arrays:
.. math::
Calculates the Mahalanobis distance between two vectors.
:param src: Source array that should have from 1 to 4 channels so that the results can be stored in :ocv:class:`Scalar_` 's.
Calculates per-element minimum of two arrays or an array and a scalar.
:param src1: First source array.
:param src2: Second source array of the same size and type as ``src1`` .
:param value: Real scalar value.
:param dst: Destination array of the same size and type as ``src1`` .
The functions ``min`` compute the per-element minimum of two arrays:
.. math::
Finds the global minimum and maximum in an array.
:param minVal: Pointer to the returned minimum value. ``NULL`` is used if not required.
:param maxVal: Pointer to the returned maximum value. ``NULL`` is used if not required.
:param minIdx: Pointer to the returned minimum location (in nD case). ``NULL`` is used if not required. Otherwise, it must point to an array of ``src.dims`` elements. The coordinates of the minimum element in each dimension are stored there sequentially.
.. note:: When ``minIdx`` is not NULL, it must have at least 2 elements (as well as ``maxIdx``), even if ``src`` is a single-row or single-column matrix. In OpenCV (following MATLAB) each array has at least 2 dimensions, i.e. a single-column matrix is an ``Mx1`` matrix (and therefore ``minIdx``/``maxIdx`` will be ``(i1,0)``/``(i2,0)``) and a single-row matrix is a ``1xN`` matrix (and therefore ``minIdx``/``maxIdx`` will be ``(0,j1)``/``(0,j2)``).
:param maxIdx: Pointer to the returned maximum location (in nD case). ``NULL`` is used if not required.
The function ``minMaxIdx`` finds the minimum and maximum element values and their positions. The extremums are searched across the whole array or, if ``mask`` is not an empty array, in the specified array region.
The function does not work with multi-channel arrays. If you need to find minimum or maximum elements across all the channels, use
The functions do not work with multi-channel arrays. If you need to find minimum or maximum elements across all the channels, use :ocv:func:`Mat::reshape` first to reinterpret the array as single-channel.
:ocv:func:`extractImageCOI`,
:ocv:func:`mixChannels`,
:ocv:func:`split`,
:ocv:func:`Mat::reshape`
Copies specified channels from input arrays to the specified channels of output arrays.
:param src: Input array or vector of matrices. All the matrices must have the same size and the same depth.
:param nsrc: Number of matrices in ``src`` .
:param dst: Output array or vector of matrices. All the matrices *must be allocated* . Their size and depth must be the same as in ``src[0]`` .
:param ndst: Number of matrices in ``dst`` .
:param fromTo: Array of index pairs specifying which channels are copied and where. ``fromTo[k*2]`` is a 0-based index of the input channel in ``src`` . ``fromTo[k*2+1]`` is an index of the output channel in ``dst`` . The continuous channel numbering is used: the first input image channels are indexed from ``0`` to ``src[0].channels()-1`` , the second input image channels are indexed from ``src[0].channels()`` to ``src[0].channels() + src[1].channels()-1``, and so on. The same scheme is used for the output image channels. As a special case, when ``fromTo[k*2]`` is negative, the corresponding output channel is filled with zero.
:param npairs: Number of index pairs in ``fromTo``.
The functions ``mixChannels`` provide an advanced mechanism for shuffling image channels.
:ocv:func:`split` and
:ocv:func:`merge` and some forms of
:ocv:func:`cvtColor` are partial cases of ``mixChannels`` .
Performs the per-element multiplication of two Fourier spectrums.
:param src1: First source array.
:param src2: Second source array of the same size and type as ``src1`` .
:param dst: Destination array of the same size and type as ``src1`` .
:param flags: Operation flags. Currently, the only supported flag is ``DFT_ROWS``, which indicates that each row of ``src1`` and ``src2`` is an independent 1D Fourier spectrum.
:param conj: Optional flag that conjugates the second source array before the multiplication (true) or not (false).
Calculates the per-element scaled product of two arrays.
:param src: Source single-channel matrix. Note that unlike :ocv:func:`gemm`, the function can multiply not only floating-point matrices.
Calculates the product of a matrix and its transposition.
:param scale: Optional scale factor for the matrix product.
:param rtype: Optional type of the destination matrix. When it is negative, the destination matrix will have the same type as ``src`` . Otherwise, it will be ``type=CV_MAT_DEPTH(rtype)`` that should be either ``CV_32F`` or ``CV_64F`` .
The function ``mulTransposed`` calculates the product of ``src`` and its transposition:
.. math::
Calculates an absolute array norm, an absolute difference norm, or a relative difference norm.
:param src2: Second source array of the same size and the same type as ``src1`` .
:param normType: Type of the norm. See the details below.
:param mask: Optional operation mask. It must have the same size as ``src1`` and ``CV_8UC1`` type.
Normalizes the norm or value range of an array.
:param src: Source array.
:param dst: Destination array of the same size as ``src`` .
:param alpha: Norm value to normalize to or the lower range boundary in case of the range normalization.
:param beta: Upper range boundary in case of the range normalization. It is not used for the norm normalization.
:param normType: Normalization type. See the details below.
:param rtype: When the parameter is negative, the destination array has the same type as ``src``. Otherwise, it has the same number of channels as ``src`` and the depth ``=CV_MAT_DEPTH(rtype)`` .
:param mask: Optional operation mask.
Projects vector(s) to the principal component subspace.
.. ocv:pyfunction:: cv2.PCAProject(data, mean, eigenvectors[, result]) -> result
:param vec: Input vector(s). They must have the same dimensionality and the same layout as the input data used at PCA phase. That is, if ``CV_PCA_DATA_AS_ROW`` are specified, then ``vec.cols==data.cols`` (vector dimensionality) and ``vec.rows`` is the number of vectors to project. The same is true for the ``CV_PCA_DATA_AS_COL`` case.
Reconstructs vectors from their PC projections.
.. ocv:pyfunction:: cv2.PCABackProject(data, mean, eigenvectors[, result]) -> result
:param vec: Coordinates of the vectors in the principal component subspace. The layout and size are the same as of ``PCA::project`` output vectors.
Performs the perspective matrix transformation of vectors.
:param src: Source two-channel or three-channel floating-point array. Each element is a 2D/3D vector to be transformed.
:param dst: Destination array of the same size and type as ``src`` .
:param mtx: ``3x3`` or ``4x4`` floating-point transformation matrix.
The function ``perspectiveTransform`` transforms every element of ``src`` by treating it as a 2D or 3D vector, in the following way:
Calculates the rotation angle of 2D vectors.
:param x: Source floating-point array of x-coordinates of 2D vectors.
:param y: Source array of y-coordinates of 2D vectors. It must have the same size and the same type as ``x`` .
:param angle: Destination array of vector angles. It has the same size and same type as ``x`` .
:param angleInDegrees: When it is true, the function computes the angle in degrees. Otherwise, they are measured in radians.
The function ``phase`` computes the rotation angle of each 2D vector that is formed from the corresponding elements of ``x`` and ``y`` :
Computes x and y coordinates of 2D vectors from their magnitude and angle.
.. ocv:pyoldfunction:: cv.PolarToCart(magnitude, angle, x, y, angleInDegrees=0) -> None
:param magnitude: Source floating-point array of magnitudes of 2D vectors. It can be an empty matrix ( ``=Mat()`` ). In this case, the function assumes that all the magnitudes are =1. If it is not empty, it must have the same size and type as ``angle`` .
:param angle: Source floating-point array of angles of 2D vectors.
:param x: Destination array of x-coordinates of 2D vectors. It has the same size and type as ``angle``.
:param y: Destination array of y-coordinates of 2D vectors. It has the same size and type as ``angle``.
:param angleInDegrees: When it is true, the input angles are measured in degrees. Otherwise, they are measured in radians.
The function ``polarToCart`` computes the Cartesian coordinates of each 2D vector represented by the corresponding elements of ``magnitude`` and ``angle`` :
@ -2602,11 +2601,11 @@ Fills arrays with random numbers.
:param mat:2D or N-dimensional matrix. Currently matrices with more than 4 channels are not supported by the methods. Use :ocv:func:`Mat::reshape` as a possible workaround.
:param distType:Distribution type, ``RNG::UNIFORM`` or ``RNG::NORMAL`` .
:param a:First distribution parameter. In case of the uniform distribution, this is an inclusive lower boundary. In case of the normal distribution, this is a mean value.
:param b:Second distribution parameter. In case of the uniform distribution, this is a non-inclusive upper boundary. In case of the normal distribution, this is a standard deviation (diagonal of the standard deviation matrix or the full standard deviation matrix).
:param saturateRange:Pre-saturation flag; for uniform distribution only. If it is true, the method will first convert a and b to the acceptable value range (according to the mat datatype) and then will generate uniformly distributed random numbers within the range ``[saturate(a), saturate(b))``. If ``saturateRange=false``, the method will generate uniformly distributed random numbers in the original range ``[a, b)`` and then will saturate them. It means, for example, that ``theRNG().fill(mat_8u, RNG::UNIFORM, -DBL_MAX, DBL_MAX)`` will likely produce an array mostly filled with 0's and 255's, since the range ``(0, 255)`` is significantly smaller than ``[-DBL_MAX, DBL_MAX)``.
Each of the methods fills the matrix with the random values from the specified distribution. As the new numbers are generated, the RNG state is updated accordingly. In case of multiple-channel images, every channel is filled independently, which means that RNG cannot generate samples from the multi-dimensional Gaussian distribution with non-diagonal covariance matrix directly. To do that, the method generates samples from multi-dimensional standard Gaussian distribution with zero mean and identity covariance matrix, and then transforms them using :ocv:func:`transform` to get samples from the specified Gaussian distribution.
@ -2640,7 +2639,7 @@ The second non-template variant of the function fills the matrix ``mtx`` with un
:ocv:class:`RNG`,
:ocv:func:`randn`,
:ocv:func:`theRNG`
@ -2673,7 +2672,7 @@ Shuffles the array elements randomly.
@ -2718,7 +2717,7 @@ Reduces a matrix to a vector.
* **CV_REDUCE_MIN** The output is the minimum (column/row-wise) of all rows/columns of the matrix.
:param dtype:When it is negative, the destination vector will have the same type as the source matrix. Otherwise, its type will be ``CV_MAKE_TYPE(CV_MAT_DEPTH(dtype), mtx.channels())`` .
The function ``reduce`` reduces the matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. For example, the function can be used to compute horizontal and vertical projections of a raster image. In case of ``CV_REDUCE_SUM`` and ``CV_REDUCE_AVG`` , the output may have a larger element bit-depth to preserve accuracy. And multi-channel arrays are also supported in these two reduction modes.
..seealso:::ocv:func:`repeat`
@ -2741,7 +2740,7 @@ Fills the destination array with repeated copies of the source array.
:param src:Source array to replicate.
:param dst:Destination array of the same type as ``src`` .
:param ny:Flag to specify how many times the ``src`` is repeated along the vertical axis.
:param nx:Flag to specify how many times the ``src`` is repeated along the horizontal axis.
The second variant of the function is more convenient to use with
:ref:`MatrixExpressions` .
..seealso::
@ -2779,9 +2778,9 @@ Calculates the sum of a scaled array and another array.
:param scale:Scale factor for the first array.
:param src2:Second source array of the same size and type as ``src1`` .
:param dst:Destination array of the same size and type as ``src1`` .
The function ``scaleAdd`` is one of the classical primitive linear algebra operations, known as ``DAXPY`` or ``SAXPY`` in `BLAS <http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms>`_. It calculates the sum of a scaled array and another array:
..math::
@ -2874,7 +2873,7 @@ Solves one or more linear systems or least-squares problems.
* **DECOMP_QR** QR factorization. The system can be over-defined and/or the matrix ``src1`` can be singular.
* **DECOMP_NORMAL** While all the previous flags are mutually exclusive, this flag can be used together with any of the previous. It means that the normal equations :math:`\texttt{src1}^T\cdot\texttt{src1}\cdot\texttt{dst}=\texttt{src1}^T\texttt{src2}` are solved instead of the original system :math:`\texttt{src1}\cdot\texttt{dst}=\texttt{src2}` .
The function ``solve`` solves a linear system or least-squares problem (the latter is possible with SVD or QR methods, or by specifying the flag ``DECOMP_NORMAL`` ):
..math::
@ -2960,7 +2959,7 @@ Sorts each row or each column of a matrix.
:param src:Source single-channel array.
:param dst:Destination array of the same size and type as ``src`` .
:param flags:Operation flags, a combination of the following values:
* **CV_SORT_EVERY_ROW** Each matrix row is sorted independently.
@ -2991,7 +2990,7 @@ Sorts each row or each column of a matrix.
:param src:Source single-channel array.
:param dst:Destination integer array of the same size as ``src`` .
:param flags:Operation flags that could be a combination of the following values:
* **CV_SORT_EVERY_ROW** Each matrix row is sorted independently.
@ -3026,7 +3025,7 @@ Divides a multi-channel array into several single-channel arrays.
@ -3066,7 +3065,7 @@ Calculates a square root of array elements.
:param src:Source floating-point array.
:param dst:Destination array of the same size and type as ``src`` .
The function ``sqrt`` calculates a square root of each source array element. In case of multi-channel arrays, each channel is processed independently. The accuracy is approximately the same as that of the built-in ``std::sqrt`` .
..seealso::
@ -3088,18 +3087,18 @@ Calculates the per-element difference between two arrays or array and a scalar.
:param vt:Transposed matrix of right singular values
:param flags:Operation flags - see :ocv:func:`SVD::SVD`.
The methods/functions perform SVD of a matrix. Unlike the ``SVD::SVD`` constructor and ``SVD::operator()``, they store the results to the user-provided matrices. ::
Mat A, w, u, vt;
SVD::compute(A, w, u, vt);
SVD::solveZ
-----------
@ -3254,7 +3253,7 @@ Solves an under-determined singular linear system.
:param dst:Found solution.
The method finds a unit-length solution ``x`` of a singular linear system
``A*x = 0``. Depending on the rank of ``A``, there can be no solutions, a single solution or an infinite number of solutions. In general, the algorithm solves the following problem:
..math::
@ -3274,18 +3273,18 @@ Performs a singular value back substitution.
..ocv:cfunction:: void cvSVBkSb( const CvArr* w, const CvArr* u, const CvArr* v, const CvArr* rhs, CvArr* dst, int flags)
..ocv:pyoldfunction:: cv.SVBkSb(W, U, V, B, X, flags) -> None
:param w:Singular values.
:param u:Left singular vectors.
:param vt:Transposed matrix of right singular vectors.
:param rhs:Right-hand side of a linear system ``(u*w*v')*dst = rhs`` to be solved, where ``A = u*w*v'`` is the previously decomposed matrix.
:param dst:Found solution of the system.
The method computes a back substitution for the specified right-hand side:
@ -3294,7 +3293,7 @@ The method computes a back substitution for the specified right-hand side:
Using this technique you can either get a very accurate solution of the convenient linear system, or the best (in the least-squares terms) pseudo-solution of an overdetermined linear system.
..note:: Explicit SVD with the further back substitution only makes sense if you need to solve many linear systems with the same left-hand side (for example, ``src`` ). If all you need is to solve a single system (possibly with multiple ``rhs`` immediately available), simply call :ocv:func:`solve` and pass ``DECOMP_SVD`` there. It does absolutely the same thing.
@ -3306,10 +3305,10 @@ Calculates the sum of array elements.
@ -95,7 +95,7 @@ Computes the cube root of an argument.
..ocv:cfunction:: float cvCbrt(float val)
..ocv:pyoldfunction:: cv.Cbrt(value)-> float
:param val:A function argument.
@ -151,7 +151,7 @@ Determines if the argument is Infinity.
..ocv:cfunction:: int cvIsInf(double value)
..ocv:pyoldfunction:: cv.IsInf(value)-> int
:param value:The input floating-point value
The function returns 1 if the argument is a plus or minus infinity (as defined by IEEE754 standard) and 0 otherwise.
@ -162,7 +162,7 @@ Determines if the argument is Not A Number.
..ocv:cfunction:: int cvIsNaN(double value)
..ocv:pyoldfunction:: cv.IsNaN(value)-> int
:param value:The input floating-point value
The function returns 1 if the argument is Not A Number (as defined by IEEE754 standard), 0 otherwise.
@ -186,8 +186,8 @@ Signals an error and raises an exception.
:param exc:Exception to throw.
:param status:Error code. Normally, it is a negative value. The list of pre-defined error codes can be found in ``cxerror.h`` .
:param err_msg:Text of the error message.
:param args:``printf`` -like formatted error message in parentheses.
@ -209,7 +209,7 @@ The macro ``CV_Error_`` can be used to construct an error message on-fly to incl
Exception
---------
..ocv:class:: Exception : public std::exception
Exception class passed to an error. ::
@ -261,7 +261,7 @@ Deallocates a memory buffer.
..ocv:cfunction:: void cvFree( void** pptr )
:param ptr:Pointer to the allocated buffer.
:param pptr:Double pointer to the allocated buffer
The function deallocates the buffer allocated with :ocv:func:`fastMalloc` . If a NULL pointer is passed, the function does nothing. The C version of the function clears the pointer ``*pptr`` to avoid problems with double memory deallocation.
@ -286,10 +286,10 @@ Returns true if the specified feature is supported by the host hardware.
@ -167,7 +167,7 @@ Finds the best match for each descriptor from a query set.
:param masks:Set of masks. Each ``masks[i]`` specifies permissible matches between the input query descriptors and stored train descriptors from the i-th image ``trainDescCollection[i]``.
In the first variant of this method, the train descriptors are passed as an input argument. In the second variant of the method, train descriptors collection that was set by ``DescriptorMatcher::add`` is used. Optional mask (or masks) can be passed to specify which query and training descriptors can be matched. Namely, ``queryDescriptors[i]`` can be matched with ``trainDescriptors[j]`` only if ``mask.at<uchar>(i,j)`` is non-zero.
@ -193,7 +193,7 @@ Finds the k best matches for each descriptor from a query set.
:param compactResult:Parameter used when the mask (or masks) is not empty. If ``compactResult`` is false, the ``matches`` vector has the same size as ``queryDescriptors`` rows. If ``compactResult`` is true, the ``matches`` vector does not contain matches for fully masked-out query descriptors.
These extended variants of :ocv:func:`DescriptorMatcher::match` methods find several best matches for each query descriptor. The matches are returned in the distance increasing order. See :ocv:func:`DescriptorMatcher::match` for the details about query and train descriptors.
@ -218,7 +218,7 @@ For each query descriptor, finds the training descriptors not farther than the s
:param compactResult:Parameter used when the mask (or masks) is not empty. If ``compactResult`` is false, the ``matches`` vector has the same size as ``queryDescriptors`` rows. If ``compactResult`` is true, the ``matches`` vector does not contain matches for fully masked-out query descriptors.
:param maxDistance:Threshold for the distance between matched descriptors.
For each query descriptor, the methods find such training descriptors that the distance between the query descriptor and the training descriptor is equal or smaller than ``maxDistance``. Found matches are returned in the distance increasing order.
:param emptyTrainData:If ``emptyTrainData`` is false, the method creates a deep copy of the object, that is, copies both parameters and train data. If ``emptyTrainData`` is true, the method creates an object copy with the current parameters but with empty train data.
@ -241,15 +241,15 @@ Creates a descriptor matcher of a given type with the default parameters (using
:param descriptorMatcherType:Descriptor matcher type. Now the following matcher types are supported:
* ``BruteForce`` (it uses ``L2`` )

* ``BruteForce-L1``

* ``BruteForce-Hamming``

* ``BruteForce-Hamming(2)``

* ``FlannBased``
@ -258,7 +258,7 @@ Creates a descriptor matcher of a given type with the default parameters (using
BFMatcher
-----------------
..ocv:class:: BFMatcher : public DescriptorMatcher
Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. This descriptor matcher supports masking permissible matches of descriptor sets. ::
@ -267,16 +267,16 @@ BFMatcher::BFMatcher
--------------------
Brute-force matcher constructor.
..ocv:function:: BFMatcher::BFMatcher( int normType, bool crossCheck=false )
:param normType:One of ``NORM_L1``, ``NORM_L2``, ``NORM_HAMMING``, ``NORM_HAMMING2``. ``L1`` and ``L2`` norms are preferable choices for SIFT and SURF descriptors, ``NORM_HAMMING`` should be used with ORB and BRIEF, ``NORM_HAMMING2`` should be used with ORB when ``WTA_K==3`` or ``4`` (see ORB::ORB constructor description).
:param crossCheck:If it is false (the default ``BFMatcher`` behaviour), the matcher finds the k nearest neighbors for each query descriptor. If ``crossCheck==true``, then the ``knnMatch()`` method with ``k=1`` will only return pairs ``(i,j)`` such that for the ``i-th`` query descriptor the ``j-th`` descriptor in the matcher's collection is the nearest and vice versa, i.e. the ``BFMatcher`` will only return consistent pairs. This technique usually produces the best results with a minimal number of outliers when there are enough matches. It is an alternative to the ratio test used by D. Lowe in the SIFT paper.
FlannBasedMatcher
-----------------
..ocv:class:: FlannBasedMatcher : public DescriptorMatcher
Flann-based descriptor matcher. This matcher trains :ocv:class:`flann::Index_` on a train descriptor collection and calls its nearest search methods to find the best matches. So, this matcher may be faster when matching a large train collection than the brute force matcher. ``FlannBasedMatcher`` does not support masking permissible matches of descriptor sets because ``flann::Index`` does not support this. ::
Abstract base class for 2D image feature detectors. ::
@ -156,7 +156,7 @@ for example: ``"GridFAST"``, ``"PyramidSTAR"`` .
FastFeatureDetector
-------------------
..ocv:class:: FastFeatureDetector : public FeatureDetector
Wrapping class for feature detection using the
:ocv:func:`FAST` method. ::
@ -252,7 +252,7 @@ Wrapping class for feature detection using the
DenseFeatureDetector
--------------------
..ocv:class:: DenseFeatureDetector : public FeatureDetector
Class for generation of image features which are distributed densely and regularly over the image. ::
@ -279,7 +279,7 @@ The detector generates several levels (in the amount of ``featureScaleLevels``)
SimpleBlobDetector
-------------------
..ocv:class:: SimpleBlobDetector : public FeatureDetector
Class for extracting blobs from an image. ::
@ -344,7 +344,7 @@ Default values of parameters are tuned to extract dark circular blobs.
GridAdaptedFeatureDetector
--------------------------
..ocv:class:: GridAdaptedFeatureDetector : public FeatureDetector
Class adapting a detector to partition the source image into a grid and detect points in each cell. ::
@ -369,7 +369,7 @@ Class adapting a detector to partition the source image into a grid and detect p
PyramidAdaptedFeatureDetector
-----------------------------
..ocv:class:: PyramidAdaptedFeatureDetector : public FeatureDetector
Class adapting a detector to detect points over multiple levels of a Gaussian pyramid. Consider using this class for detectors that are not inherently scaled. ::
@ -387,7 +387,7 @@ Class adapting a detector to detect points over multiple levels of a Gaussian py
DynamicAdaptedFeatureDetector
-----------------------------
..ocv:class:: DynamicAdaptedFeatureDetector : public FeatureDetector
Adaptively adjusting detector that iteratively detects features until the desired number is found. ::
..ocv:function:: DynamicAdaptedFeatureDetector::DynamicAdaptedFeatureDetector( const Ptr<AdjusterAdapter>& adjaster, int min_features=400, int max_features=500, int max_iters=5 )
:param adjuster::ocv:class:`AdjusterAdapter` that detects features and adjusts parameters.
@ -443,7 +443,7 @@ The constructor
AdjusterAdapter
---------------
..ocv:class:: AdjusterAdapter : public FeatureDetector
Class providing an interface for adjusting parameters of a feature detector. This interface is used by :ocv:class:`DynamicAdaptedFeatureDetector` . It is a wrapper for :ocv:class:`FeatureDetector` that enables adjusting parameters after feature detection. ::
@ -522,7 +522,7 @@ Creates an adjuster adapter by name
FastAdjuster
------------
..ocv:class:: FastAdjuster : public AdjusterAdapter
:ocv:class:`AdjusterAdapter` for :ocv:class:`FastFeatureDetector`. This class decreases or increases the threshold value by 1. ::
@ -535,7 +535,7 @@ FastAdjuster
StarAdjuster
------------
..ocv:class:: StarAdjuster : public AdjusterAdapter
:ocv:class:`AdjusterAdapter` for :ocv:class:`StarFeatureDetector`. This class adjusts the ``responseThreshhold`` of ``StarFeatureDetector``. ::
@ -3,7 +3,7 @@ Common Interfaces of Generic Descriptor Matchers
..highlight:: cpp
Matchers of keypoint descriptors in OpenCV have wrappers with a common interface that enables you to easily switch
between different algorithms solving the same problem. This section is devoted to matching descriptors
that cannot be represented as vectors in a multidimensional space. ``GenericDescriptorMatcher`` is a more generic interface for descriptors. It does not make any assumptions about descriptor representation.
@ -151,12 +151,12 @@ Classifies keypoints from a query set.
:param trainKeypoints:Keypoints from a train image.
The method classifies each keypoint from a query set. The first variant of the method takes a train image and its keypoints as an input argument. The second variant uses the internally stored training collection that can be built using the ``GenericDescriptorMatcher::add`` method.
The methods do the following:
#. Call the ``GenericDescriptorMatcher::match`` method to find correspondence between the query set and the training set.

#. Set the ``class_id`` field of each keypoint from the query set to ``class_id`` of the corresponding keypoint from the training set.
@ -195,7 +195,7 @@ Finds the ``k`` best matches for each query keypoint.
The methods are extended variants of ``GenericDescriptorMatch::match``. The parameters are similar, and the semantics is similar to ``DescriptorMatcher::knnMatch``. But this class does not require explicitly computed keypoint descriptors.
@ -31,7 +31,7 @@ Draws the found matches of keypoints from two images.
:param matchesMask:Mask determining which matches are drawn. If the mask is empty, all matches are drawn.
:param flags:Flags setting drawing features. Possible ``flags`` bit values are defined by ``DrawMatchesFlags``.
This function draws matches of keypoints from two images in the output image. Match is a line connecting two keypoints (circles). The structure ``DrawMatchesFlags`` is defined as follows:
@ -24,7 +24,7 @@ Detects corners using the FAST algorithm by [Rosten06]_.
MSER
----
..ocv:class:: MSER : public FeatureDetector
Maximally stable extremal region extractor. ::
@ -50,7 +50,7 @@ http://en.wikipedia.org/wiki/Maximally_stable_extremal_regions). Also see http:/
ORB
---
..ocv:class:: ORB : public Feature2D
Class implementing the ORB (*oriented BRIEF*) keypoint detector and descriptor extractor, described in [RRKB11]_. The algorithm uses FAST in pyramids to detect stable keypoints, selects the strongest features using FAST or Harris response, finds their orientation using first-order moments and computes the descriptors using BRIEF (where the coordinates of random point pairs (or k-tuples) are rotated according to the measured orientation).
@ -60,39 +60,37 @@ ORB::ORB
--------
The ORB constructor
..ocv:function:: ORB::ORB(int nfeatures = 500, float scaleFactor = 1.2f, int nlevels = 8, int edgeThreshold = 31, int firstLevel = 0, int WTA_K=2, int scoreType=HARRIS_SCORE, int patchSize=31)
:param nfeatures:The maximum number of features to retain.
:param scaleFactor:Pyramid decimation ratio, greater than 1. ``scaleFactor==2`` means the classical pyramid, where each next level has 4x fewer pixels than the previous, but such a big scale factor will degrade feature matching scores dramatically. On the other hand, a scale factor too close to 1 means that covering a certain scale range requires more pyramid levels, so the speed will suffer.
:param nlevels:The number of pyramid levels. The smallest level will have linear size equal to ``input_image_linear_size/pow(scaleFactor, nlevels)``.
:param edgeThreshold:This is the size of the border where the features are not detected. It should roughly match the ``patchSize`` parameter.
:param firstLevel:It should be 0 in the current implementation.
:param WTA_K:The number of points that produce each element of the oriented BRIEF descriptor. The default value 2 means the BRIEF where we take a random point pair and compare their brightnesses, so we get 0/1 response. Other possible values are 3 and 4. For example, 3 means that we take 3 random points (of course, those point coordinates are random, but they are generated from the pre-defined seed, so each element of BRIEF descriptor is computed deterministically from the pixel rectangle), find point of maximum brightness and output index of the winner (0, 1 or 2). Such output will occupy 2 bits, and therefore it will need a special variant of Hamming distance, denoted as ``NORM_HAMMING2`` (2 bits per bin). When ``WTA_K=4``, we take 4 random points to compute each bin (that will also occupy 2 bits with possible values 0, 1, 2 or 3).
:param scoreType:The default HARRIS_SCORE means that the Harris algorithm is used to rank features (the score is written to ``KeyPoint::score`` and is used to retain the best ``nfeatures`` features); FAST_SCORE is an alternative value of the parameter that produces slightly less stable keypoints, but is a little faster to compute.
:param patchSize:Size of the patch used by the oriented BRIEF descriptor. Of course, on smaller pyramid layers the perceived image area covered by a feature will be larger.
ORB::operator()
---------------
Finds keypoints in an image and computes their descriptors
@ -338,7 +338,7 @@ Blocks the current CPU thread until all operations in the stream are complete.
gpu::StreamAccessor
-------------------
..ocv:struct:: gpu::StreamAccessor
Class that enables getting ``cudaStream_t`` from :ocv:class:`gpu::Stream` and is declared in ``stream_accessor.hpp`` because it is the only public header that depends on the CUDA Runtime API. Including it brings a dependency to your code. ::
@ -129,11 +129,11 @@ The function ``imwrite`` saves the image to the specified file. The image format
:ocv:func:`cvtColor` to convert it before saving. Or, use the universal XML I/O functions to save the image to XML or YAML format.
It is possible to store PNG images with an alpha channel using this function. To do this, create an 8-bit (or 16-bit) 4-channel image BGRA, where the alpha channel goes last. Fully transparent pixels should have alpha set to 0, fully opaque pixels should have alpha set to 255/65535. The sample below shows how to create such a BGRA image and store it to a PNG file. It also demonstrates how to set custom compression parameters ::
The methods/functions decode and return the just grabbed frame. If no frames have been grabbed (the camera has been disconnected, or there are no more frames in the video file), the methods return false and the functions return a NULL pointer.
@ -322,11 +322,11 @@ Grabs, decodes and returns the next video frame.
The methods/functions combine :ocv:func:`VideoCapture::grab` and :ocv:func:`VideoCapture::retrieve` in one call. This is the most convenient method for reading video files or capturing data from a camera: they decode and return the just grabbed frame. If no frames have been grabbed (the camera has been disconnected, or there are no more frames in the video file), the methods return false and the functions return a NULL pointer.
@ -335,15 +335,15 @@ The methods/functions combine :ocv:func:`VideoCapture::grab` and :ocv:func:`Vide
:param fourcc:4-character code of codec used to compress the frames. For example, ``CV_FOURCC('P','I','M','1')`` is an MPEG-1 codec, ``CV_FOURCC('M','J','P','G')`` is a motion-jpeg codec etc.
:param fps:Framerate of the created video stream.
:param frameSize:Size of the video frames.
:param isColor:If it is not zero, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only).
The constructors/functions initialize video writers. On Linux FFMPEG is used to write videos; on Windows FFMPEG or VFW is used; on MacOSX QTKit is used.
@ -526,9 +526,9 @@ Writes the next video frame
..ocv:cfunction:: int cvWriteFrame( CvVideoWriter* writer, const IplImage* image )
@ -27,7 +27,7 @@ Creates a trackbar and attaches it to the specified window.
The function ``createTrackbar`` creates a trackbar (a slider or range control) with the specified name and range, assigns a variable ``value`` to be a position synchronized with the trackbar and specifies the callback function ``onChange`` to be called on the trackbar position change. The created trackbar is displayed in the specified window ``winname``.
..note::
**[Qt Backend Only]** ``winname`` can be empty (or NULL) if the trackbar should be attached to the control panel.
Clicking the label of each trackbar enables editing the trackbar values manually.
:param onMouse:Mouse callback. See OpenCV samples, such as http://code.opencv.org/svn/opencv/trunk/opencv/samples/cpp/ffilldemo.cpp, on how to specify and use the callback.
:param param:The optional parameter passed to the callback.
@ -233,5 +233,5 @@ The function ``waitKey`` waits for a key event infinitely (when
This function is the only method in HighGUI that can fetch and handle events, so it needs to be called periodically for normal event processing unless HighGUI is used within an environment that takes care of event processing.
..note::
The function only works if there is at least one HighGUI window created and the window is active. If there are several HighGUI windows, any of them can be active.
:param image:Input 8-bit or floating-point 32-bit, single-channel image.
:param eigImage:The parameter is ignored.
:param tempImage:The parameter is ignored.
:param corners:Output vector of detected corners.
@ -244,9 +247,9 @@ Determines strong corners on an image.
:param mask:Optional region of interest. If the image is not empty (it needs to have the type ``CV_8UC1`` and the same size as ``image`` ), it specifies the region in which the corners are detected.
:param blockSize:Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. See :ocv:func:`cornerEigenValsAndVecs` .
:param useHarrisDetector:Parameter indicating whether to use a Harris detector (see :ocv:func:`cornerHarris`) or :ocv:func:`cornerMinEigenVal`.
:param k:Free parameter of the Harris detector.
The function finds the most prominent corners in the image or in the specified image region, as described in [Shi94]_:
@ -255,7 +258,7 @@ The function finds the most prominent corners in the image or in the specified i
Function calculates the corner quality measure at every source image pixel using the
:ocv:func:`cornerMinEigenVal` or
:ocv:func:`cornerHarris` .
#.
Function performs a non-maximum suppression (the local maxima in a *3 x 3* neighborhood are retained).
@ -268,16 +271,16 @@ The function finds the most prominent corners in the image or in the specified i
#.
Function throws away each corner for which there is a stronger corner at a distance less than ``minDistance``.
The function can be used to initialize a point-based tracker of an object.
..note:: If the function is called with different values ``A`` and ``B`` of the parameter ``qualityLevel`` , and ``A`` > ``B``, the vector of returned corners with ``qualityLevel=A`` will be the prefix of the output vector with ``qualityLevel=B`` .
..seealso::
:ocv:func:`cornerMinEigenVal`,
:ocv:func:`cornerHarris`,
:ocv:func:`calcOpticalFlowPyrLK`,
:ocv:func:`estimateRigidTransform`,
@ -287,16 +290,16 @@ Finds circles in a grayscale image using the Hough transform.
..ocv:function:: void HoughCircles( InputArray image, OutputArray circles, int method, double dp, double minDist, double param1=100, double param2=100, int minRadius=0, int maxRadius=0 )
..ocv:cfunction:: CvSeq* cvHoughCircles( CvArr* image, void* circle_storage, int method, double dp, double min_dist, double param1=100, double param2=100, int min_radius=0, int max_radius=0 )
:param circles:Output vector of found circles. Each vector is encoded as a 3-element floating-point vector :math:`(x, y, radius)` .
:param circleStorage:In C function this is a memory storage that will contain the output sequence of found circles.
:param method:Detection method to use. Currently, the only implemented method is ``CV_HOUGH_GRADIENT`` , which is basically *21HT* , described in [Yuen90]_.
:param dp:Inverse ratio of the accumulator resolution to the image resolution. For example, if ``dp=1`` , the accumulator has the same resolution as the input image. If ``dp=2`` , the accumulator has half as big width and height.
:param maxRadius:Maximum circle radius.
The function finds circles in a grayscale image using a modification of the Hough transform.
Example: ::
HoughLines
--------------
Finds lines in a binary image using the standard Hough transform.
:param srn:For the multi-scale Hough transform, it is a divisor for the distance resolution ``rho`` . The coarse accumulator distance resolution is ``rho`` and the accurate accumulator resolution is ``rho/srn`` . If both ``srn=0`` and ``stn=0`` , the classical Hough transform is used. Otherwise, both these parameters should be positive.
:param stn:For the multi-scale Hough transform, it is a divisor for the distance resolution ``theta``.
:param method:One of the following Hough transform variants:
* **CV_HOUGH_STANDARD** classical or standard Hough transform. Every line is represented by two floating-point numbers :math:`(\rho, \theta)` , where :math:`\rho` is a distance between (0,0) point and the line, and :math:`\theta` is the angle between x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of ``CV_32FC2`` type
* **CV_HOUGH_PROBABILISTIC** probabilistic Hough transform (more efficient in case if the picture contains a few long linear segments). It returns line segments rather than the whole line. Each segment is represented by starting and ending points, and the matrix must be (the created sequence will be) of the ``CV_32SC4`` type.
* **CV_HOUGH_MULTI_SCALE** multi-scale variant of the classical Hough transform. The lines are encoded the same way as ``CV_HOUGH_STANDARD``.
:param param1:First method-dependent parameter:
* For the classical Hough transform, it is not used (0).
* For the probabilistic Hough transform, it is the minimum line length.
* For the multi-scale Hough transform, it is ``srn``.
:param param2:Second method-dependent parameter:
* For the classical Hough transform, it is not used (0).
* For the probabilistic Hough transform, it is the maximum gap between line segments lying on the same line to treat them as a single line segment (that is, to join them).
* For the multi-scale Hough transform, it is ``stn``.
The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.
preCornerDetect
---------------
Calculates a feature map for corner detection.
.. ocv:function:: void preCornerDetect( InputArray src, OutputArray dst, int ksize, int borderType=BORDER_DEFAULT )
OpenCV enables you to specify the extrapolation method. For details, see the function descriptions below.
/*
Various border types, image boundaries are denoted with '|'
* BORDER_REPLICATE: aaaaaa|abcdefgh|hhhhhhh
* BORDER_REFLECT: fedcba|abcdefgh|hgfedcb
* BORDER_REFLECT_101: gfedcb|abcdefgh|gfedcba
* BORDER_WRAP: cdefgh|abcdefgh|abcdefg
* BORDER_CONSTANT: iiiiii|abcdefgh|iiiiiii with some specified 'i'
*/
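The mapping shown in the table above can be sketched as a small Python function. This is an illustrative stand-in for how an out-of-range coordinate is folded back into ``[0, len)`` for each mode, not the actual OpenCV source; the function name and the string mode tags are hypothetical.

```python
# Illustrative sketch: map an out-of-range coordinate p back into [0, length)
# for each extrapolation mode shown in the border-type table.

def border_interpolate(p, length, border_type):
    if 0 <= p < length:
        return p
    if border_type == "replicate":        # aaaaaa|abcdefgh|hhhhhhh
        return min(max(p, 0), length - 1)
    if border_type == "wrap":             # cdefgh|abcdefgh|abcdefg
        return p % length
    if border_type == "reflect":          # fedcba|abcdefgh|hgfedcb
        period = 2 * length
        p %= period
        return p if p < length else period - 1 - p
    if border_type == "reflect_101":      # gfedcb|abcdefgh|gfedcba
        period = 2 * length - 2
        p %= period
        return p if p < length else period - p
    if border_type == "constant":         # OpenCV's borderInterpolate returns -1 here
        return -1
    raise ValueError(border_type)
```

Running it over the sample row ``abcdefgh`` reproduces each row of the table.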
bilateralFilter
-------------------
Applies the bilateral filter to an image.
:param src:Source 8-bit or floating-point, 1-channel or 3-channel image.
:param dst:Destination image of the same size and type as ``src`` .
:param d:Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from ``sigmaSpace`` .
:param sigmaColor:Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see ``sigmaSpace`` ) will be mixed together, resulting in larger areas of semi-equal color.
:param sigmaSpace:Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see ``sigmaColor`` ). When ``d>0`` , it specifies the neighborhood size regardless of ``sigmaSpace`` . Otherwise, ``d`` is proportional to ``sigmaSpace`` .
blur
--------
Smoothes an image using the normalized box filter.
:param src:Source image. The image can have any number of channels, which are processed independently. The depth should be ``CV_8U``, ``CV_16U``, ``CV_16S``, ``CV_32F`` or ``CV_64F``.
:param dst:Destination image of the same size and type as ``src`` .
:param ksize:Smoothing kernel size.
:param anchor:Anchor point. The default value ``Point(-1,-1)`` means that the anchor is at the kernel center.
The call ``blur(src, dst, ksize, anchor, borderType)`` is equivalent to ``boxFilter(src, dst, src.type(), anchor, true, borderType)`` .
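As a sketch of what the normalized box filter computes, here is a plain-Python 1-D version: every output sample is the mean of its ``ksize`` neighborhood, with replicate-style border handling. The helper name is hypothetical; this mirrors the definition, not OpenCV's optimized implementation.

```python
# Minimal 1-D normalized box filter with BORDER_REPLICATE handling.

def box_blur_1d(src, ksize):
    n = len(src)
    radius = ksize // 2
    dst = []
    for i in range(n):
        # Clamp out-of-range indices to the border (replicate extrapolation).
        window = [src[min(max(i + d, 0), n - 1)] for d in range(-radius, radius + 1)]
        dst.append(sum(window) / ksize)   # normalized: divide by the kernel size
    return dst
```

A 2-D box blur is just this filter applied along rows and then along columns, which is why the box filter is separable.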
:ocv:func:`boxFilter`,
:ocv:func:`bilateralFilter`,
:ocv:func:`GaussianBlur`,
:ocv:func:`medianBlur`
borderInterpolate
---------------------
Computes the source location of an extrapolated pixel.
:param p:0-based coordinate of the extrapolated pixel along one of the axes, likely <0 or >= ``len`` .
:param len:Length of the array along the corresponding axis.
:param borderType:Border type, one of the ``BORDER_*`` , except for ``BORDER_TRANSPARENT`` and ``BORDER_ISOLATED`` . When ``borderType==BORDER_CONSTANT`` , the function always returns -1, regardless of ``p`` and ``len`` .
boxFilter
-------------
Smoothes an image using the box filter.
:param src:Source image.
:param dst:Destination image of the same size and type as ``src`` .
:param ksize:Smoothing kernel size.
:param anchor:Anchor point. The default value ``Point(-1,-1)`` means that the anchor is at the kernel center.
Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood.
:ocv:func:`bilateralFilter`,
:ocv:func:`GaussianBlur`,
:ocv:func:`medianBlur`,
:ocv:func:`integral`
buildPyramid
----------------
Constructs the Gaussian pyramid for an image.
.. ocv:function:: void buildPyramid( InputArray src, OutputArrayOfArrays dst, int maxlevel, int borderType=BORDER_DEFAULT )
:param src:Source image. Check :ocv:func:`pyrDown` for the list of supported types.
copyMakeBorder
------------------
Forms a border around an image.
:param src:Source image.
:param dst:Destination image of the same type as ``src`` and the size ``Size(src.cols+left+right, src.rows+top+bottom)`` .
:param top:
:param bottom:
:param left:
:param right:Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate. For example, ``top=1, bottom=1, left=1, right=1`` mean that 1 pixel-wide border needs to be built.
:param borderType:Border type. See :ocv:func:`borderInterpolate` for details.
:param value:Border value if ``borderType==BORDER_CONSTANT`` .
The function copies the source image into the middle of the destination image. The areas to the left, to the right, above and below the copied source image will be filled with extrapolated pixels. This is not what
:ocv:class:`FilterEngine` or filtering functions based on it do (they extrapolate pixels on the fly), but what other more complex functions, including your own, may do to simplify image boundary handling.
createBoxFilter
-------------------
Returns a box filter engine.
:param srcType:Source image type.
:param sumType:Intermediate horizontal sum type that must have as many channels as ``srcType`` .
:param dstType:Destination image type that must have as many channels as ``srcType`` .
:param ksize:Aperture size.
:param anchor:Anchor position with the kernel. Negative values mean that the anchor is at the kernel center.
:param normalize:Flag specifying whether the sums are normalized or not. See :ocv:func:`boxFilter` for details.
:param scale:Another way to specify normalization in lower-level ``getColumnSumFilter`` .
:param borderType:Border type to use. See :ocv:func:`borderInterpolate` .
The function is a convenience function that retrieves the horizontal sum primitive filter with
The function itself is used by
:ocv:class:`FilterEngine`,
:ocv:func:`blur`,
:ocv:func:`boxFilter`
createDerivFilter
---------------------
Returns an engine for computing image derivatives.
:param srcType:Source image type.
:param dstType:Destination image type that must have as many channels as ``srcType`` .
:param dx:Derivative order in respect of x.
:param dy:Derivative order in respect of y.
:param ksize:Aperture size. See :ocv:func:`getDerivKernels` .
:param borderType:Border type to use. See :ocv:func:`borderInterpolate` .
The function :ocv:func:`createDerivFilter` is a small convenience function that retrieves linear filter coefficients for computing image derivatives using :ocv:func:`getDerivKernels` and then creates a separable linear filter with :ocv:func:`createSeparableLinearFilter` .
:ocv:func:`createSeparableLinearFilter`,
:ocv:func:`getDerivKernels`,
:ocv:func:`Scharr`,
:ocv:func:`Sobel`
createGaussianFilter
------------------------
Returns an engine for smoothing images with the Gaussian filter.
.. ocv:function:: Ptr<FilterEngine> createGaussianFilter( int type, Size ksize, double sigmaX, double sigmaY=0, int borderType=BORDER_DEFAULT )
:param type:Source and destination image type.
:param ksize:Aperture size. See :ocv:func:`getGaussianKernel` .
:param sigmaX:Gaussian sigma in the horizontal direction. See :ocv:func:`getGaussianKernel` .
:param sigmaY:Gaussian sigma in the vertical direction. If 0, then :math:`\texttt{sigmaY}\leftarrow\texttt{sigmaX}` .
:param borderType:Border type to use. See :ocv:func:`borderInterpolate` .
The function :ocv:func:`createGaussianFilter` computes Gaussian kernel coefficients and then returns a separable linear filter for that kernel. The function is used by
:ocv:func:`createSeparableLinearFilter`,
:ocv:func:`getGaussianKernel`,
:ocv:func:`GaussianBlur`
createLinearFilter
----------------------
Creates a non-separable linear filter engine.
.. ocv:function:: Ptr<FilterEngine> createLinearFilter(int srcType, int dstType, InputArray kernel, Point _anchor=Point(-1,-1), double delta=0, int rowBorderType=BORDER_DEFAULT, int columnBorderType=-1, const Scalar& borderValue=Scalar())
.. ocv:function:: Ptr<BaseFilter> getLinearFilter(int srcType, int dstType, InputArray kernel, Point anchor=Point(-1,-1), double delta=0, int bits=0)
:param srcType:Source image type.
:param dstType:Destination image type that must have as many channels as ``srcType`` .
:param kernel:2D array of filter coefficients.
:param anchor:Anchor point within the kernel. Special value ``Point(-1,-1)`` means that the anchor is at the kernel center.
:param bits:Number of the fractional bits. The parameter is used when the kernel is an integer matrix representing fixed-point filter coefficients.
:param rowBorderType:Pixel extrapolation method in the vertical direction. For details, see :ocv:func:`borderInterpolate`.
:param columnBorderType:Pixel extrapolation method in the horizontal direction.
:param borderValue:Border value used in case of a constant border.
The function returns a pointer to a 2D linear filter for the specified kernel, the source array type, and the destination array type. The function is a higher-level function that calls ``getLinearFilter`` and passes the retrieved 2D filter to the :ocv:class:`FilterEngine` constructor.
createMorphologyFilter
--------------------------
Creates an engine for non-separable morphological operations.
.. ocv:function:: Ptr<FilterEngine> createMorphologyFilter(int op, int type, InputArray element, Point anchor=Point(-1,-1), int rowBorderType=BORDER_CONSTANT, int columnBorderType=-1, const Scalar& borderValue=morphologyDefaultBorderValue())
.. ocv:function:: Ptr<BaseFilter> getMorphologyFilter( int op, int type, InputArray kernel, Point anchor=Point(-1,-1) )
.. ocv:function:: Ptr<BaseRowFilter> getMorphologyRowFilter( int op, int type, int ksize, int anchor=-1 )
.. ocv:function:: Ptr<BaseColumnFilter> getMorphologyColumnFilter( int op, int type, int ksize, int anchor=-1 )
:param anchor:Anchor position within the structuring element. Negative values mean that the anchor is at the kernel center.
:param rowBorderType:Pixel extrapolation method in the vertical direction. For details, see :ocv:func:`borderInterpolate`.
:param columnBorderType:Pixel extrapolation method in the horizontal direction.
:param borderValue:Border value in case of a constant border. The default value, \ ``morphologyDefaultBorderValue`` , has a special meaning. It is transformed to :math:`+\infty` for the erosion and to :math:`-\infty` for the dilation, which means that the minimum (maximum) is effectively computed only over the pixels that are inside the image.
The functions construct primitive morphological filtering operations or a filter engine based on them. Normally it is enough to use :ocv:func:`createMorphologyFilter` or even higher-level :ocv:func:`erode`, :ocv:func:`dilate`, or :ocv:func:`morphologyEx`.
createSeparableLinearFilter
-------------------------------
Creates an engine for a separable linear filter.
.. ocv:function:: Ptr<FilterEngine> createSeparableLinearFilter(int srcType, int dstType, InputArray rowKernel, InputArray columnKernel, Point anchor=Point(-1,-1), double delta=0, int rowBorderType=BORDER_DEFAULT, int columnBorderType=-1, const Scalar& borderValue=Scalar())
.. ocv:function:: Ptr<BaseColumnFilter> getLinearColumnFilter(int bufType, int dstType, InputArray columnKernel, int anchor, int symmetryType, double delta=0, int bits=0)
.. ocv:function:: Ptr<BaseRowFilter> getLinearRowFilter(int srcType, int bufType, InputArray rowKernel, int anchor, int symmetryType)
:param srcType:Source array type.
:param dstType:Destination image type that must have as many channels as ``srcType`` .
:param bufType:Intermediate buffer type that must have as many channels as ``srcType`` .
:param rowKernel:Coefficients for filtering each row.
:param columnKernel:Coefficients for filtering each column.
:param bits:Number of the fractional bits. The parameter is used when the kernel is an integer matrix representing fixed-point filter coefficients.
:param rowBorderType:Pixel extrapolation method in the vertical direction. For details, see :ocv:func:`borderInterpolate`.
:param columnBorderType:Pixel extrapolation method in the horizontal direction.
:param borderValue:Border value used in case of a constant border.
:param symmetryType:Type of each row and column kernel. See :ocv:func:`getKernelType` .
The functions construct primitive separable linear filtering operations or a filter engine based on them. Normally it is enough to use
:ocv:func:`createSeparableLinearFilter` or even higher-level :ocv:func:`sepFilter2D` .
dilate
----------
Dilates an image by using a specific structuring element.
.. ocv:function:: void dilate( InputArray src, OutputArray dst, InputArray element, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar& borderValue=morphologyDefaultBorderValue() )
:param src:Source image. The number of channels can be arbitrary. The depth should be one of ``CV_8U``, ``CV_16U``, ``CV_16S``, ``CV_32F`` or ``CV_64F``.
:param dst:Destination image of the same size and type as ``src`` .
:param element:Structuring element used for dilation. If ``element=Mat()`` , a ``3 x 3`` rectangular structuring element is used.
:param anchor:Position of the anchor within the element. The default value ``(-1, -1)`` means that the anchor is at the element center.
:param iterations:Number of times dilation is applied.
:param borderType:Pixel extrapolation method. See :ocv:func:`borderInterpolate` for details.
:param borderValue:Border value in case of a constant border. The default value has a special meaning. See :ocv:func:`createMorphologyFilter` for details.
The function dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken:
.. math::

    \texttt{dst} (x,y) =  \max _{(x',y'): \, \texttt{element} (x',y') \ne0 } \texttt{src} (x+x',y+y')
erode
---------
Erodes an image by using a specific structuring element.
.. ocv:function:: void erode( InputArray src, OutputArray dst, InputArray element, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar& borderValue=morphologyDefaultBorderValue() )
:param src:Source image. The number of channels can be arbitrary. The depth should be one of ``CV_8U``, ``CV_16U``, ``CV_16S``, ``CV_32F`` or ``CV_64F``.
:param dst:Destination image of the same size and type as ``src``.
:param element:Structuring element used for erosion. If ``element=Mat()`` , a ``3 x 3`` rectangular structuring element is used.
:param anchor:Position of the anchor within the element. The default value ``(-1, -1)`` means that the anchor is at the element center.
:param iterations:Number of times erosion is applied.
:param borderType:Pixel extrapolation method. See :ocv:func:`borderInterpolate` for details.
:param borderValue:Border value in case of a constant border. The default value has a special meaning. See :ocv:func:`createMorphologyFilter` for details.
The function erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken:
.. math::

    \texttt{dst} (x,y) =  \min _{(x',y'): \, \texttt{element} (x',y') \ne0 } \texttt{src} (x+x',y+y')
filter2D
------------
Convolves an image with the kernel.
:param dst:Destination image of the same size and the same number of channels as ``src`` .
:param ddepth:Desired depth of the destination image. If it is negative, it will be the same as ``src.depth()`` . The following combinations of ``src.depth()`` and ``ddepth`` are supported:
when ``ddepth=-1``, the destination image will have the same depth as the source.
:param kernel:Convolution kernel (or rather a correlation kernel), a single-channel floating point matrix. If you want to apply different kernels to different channels, split the image into separate color planes using :ocv:func:`split` and process them individually.
:param anchor:Anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that the anchor is at the kernel center.
:param delta:Optional value added to the filtered pixels before storing them in ``dst`` .
:param borderType:Pixel extrapolation method. See :ocv:func:`borderInterpolate` for details.
The function applies an arbitrary linear filter to an image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values according to the specified border mode.
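The correlation that the function computes around the anchor can be sketched in plain Python on nested lists (a stand-in for ``cv::Mat``). This is an illustrative definition with replicate border handling, not the OpenCV implementation, and the helper name is hypothetical.

```python
# Illustrative sketch of the filter2D correlation:
# dst(y, x) = sum over (ky, kx) of kernel(ky, kx) * src(y + ky - ay, x + kx - ax)

def filter2d(src, kernel, anchor=None):
    rows, cols = len(src), len(src[0])
    krows, kcols = len(kernel), len(kernel[0])
    ay, ax = anchor if anchor else (krows // 2, kcols // 2)  # default anchor: kernel center
    dst = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            acc = 0.0
            for ky in range(krows):
                for kx in range(kcols):
                    # Clamp coordinates that fall outside the image (replicate border).
                    sy = min(max(y + ky - ay, 0), rows - 1)
                    sx = min(max(x + kx - ax, 0), cols - 1)
                    acc += kernel[ky][kx] * src[sy][sx]
            dst[y][x] = acc
    return dst
```

Note this is a correlation, not a true convolution: the kernel is not flipped, which matches the remark above that the kernel is "rather a correlation kernel".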
GaussianBlur
----------------
Smoothes an image using a Gaussian filter.
:param src:Source image. The image can have any number of channels, which are processed independently. The depth should be ``CV_8U``, ``CV_16U``, ``CV_16S``, ``CV_32F`` or ``CV_64F``.
:param dst:Destination image of the same size and type as ``src`` .
:param ksize:Gaussian kernel size. ``ksize.width`` and ``ksize.height`` can differ but they both must be positive and odd. Or, they can be zeros, and then they are computed from ``sigma*`` .
:param sigmaX:Gaussian kernel standard deviation in X direction.
:param sigmaY:Gaussian kernel standard deviation in Y direction. If ``sigmaY`` is zero, it is set to be equal to ``sigmaX`` . If both sigmas are zeros, they are computed from ``ksize.width`` and ``ksize.height`` , respectively. See :ocv:func:`getGaussianKernel` for details. To fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ``ksize`` , ``sigmaX`` , and ``sigmaY`` .
:param borderType:Pixel extrapolation method. See :ocv:func:`borderInterpolate` for details.
The function convolves the source image with the specified Gaussian kernel. In-place filtering is supported.
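The 1-D kernel coefficients behind this filter can be sketched directly from the formula that :ocv:func:`getGaussianKernel` documents: sampled Gaussian values normalized to sum to 1, with sigma derived from ``ksize`` when a non-positive sigma is given. The helper below is an illustrative plain-Python version, not the OpenCV code (OpenCV additionally uses precomputed fixed-point kernels for small sizes).

```python
import math

# Sketch of Gaussian kernel coefficients:
#   G_i = alpha * exp(-(i - (ksize-1)/2)^2 / (2 * sigma^2)),  sum(G) == 1.
# When sigma <= 0, it is derived from ksize as 0.3*((ksize-1)*0.5 - 1) + 0.8.

def gaussian_kernel(ksize, sigma):
    if sigma <= 0:
        sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8
    center = (ksize - 1) / 2.0
    coeffs = [math.exp(-((i - center) ** 2) / (2 * sigma ** 2)) for i in range(ksize)]
    total = sum(coeffs)
    return [c / total for c in coeffs]   # normalize so the coefficients sum to 1
```

Because the 2-D Gaussian kernel is separable, GaussianBlur can apply one such kernel along rows (``sigmaX``) and another along columns (``sigmaY``).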
:param kx:Output matrix of row filter coefficients. It has the type ``ktype`` .
:param ky:Output matrix of column filter coefficients. It has the type ``ktype`` .
:param dx:Derivative order in respect of x.
:param dy:Derivative order in respect of y.
Two of such generated kernels can be passed to :ocv:func:`sepFilter2D` or to :ocv:func:`createSeparableLinearFilter` .
:ocv:func:`createSeparableLinearFilter`,
:ocv:func:`getDerivKernels`,
:ocv:func:`getStructuringElement`,
:ocv:func:`GaussianBlur`
The function analyzes the kernel coefficients and returns the corresponding kernel type:
* **KERNEL_SMOOTH** All the kernel elements are non-negative and summed to 1. For example, the Gaussian kernel is both smooth kernel and symmetrical, so the function returns ``KERNEL_SMOOTH | KERNEL_SYMMETRICAL`` .
* **KERNEL_INTEGER** All the kernel coefficients are integer numbers. This flag can be combined with ``KERNEL_SYMMETRICAL`` or ``KERNEL_ASYMMETRICAL`` .
getStructuringElement
-------------------------
Returns a structuring element of the specified size and shape for morphological operations.
E_{ij}=1
* **MORPH_ELLIPSE** - an elliptic structuring element, that is, a filled ellipse inscribed into the rectangle ``Rect(0, 0, esize.width, esize.height)``
* **MORPH_CROSS** - a cross-shaped structuring element:
.. math::
E_{ij} = \fork{1}{if i=\texttt{anchor.y} or j=\texttt{anchor.x}}{0}{otherwise}
* **CV_SHAPE_CUSTOM** - custom structuring element (OpenCV 1.x API)
:param ksize:Size of the structuring element.
:param cols:Width of the structuring element
:param rows:Height of the structuring element
:param anchor:Anchor position within the element. The default value :math:`(-1, -1)` means that the anchor is at the center. Note that only the shape of a cross-shaped element depends on the anchor position. In other cases the anchor just regulates how much the result of the morphological operation is shifted.
:param anchorX:x-coordinate of the anchor
:param anchorY:y-coordinate of the anchor
:param values:integer array of ``cols``*``rows`` elements that specifies the custom shape of the structuring element, when ``shape=CV_SHAPE_CUSTOM``.
The function constructs and returns the structuring element that can be further passed to :ocv:func:`erode`, :ocv:func:`dilate`, or :ocv:func:`morphologyEx`.
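The three shapes can be sketched as 0/1 matrices in plain Python. This is illustrative only: the helper name is hypothetical, and OpenCV's ellipse rasterization may differ slightly from the simple inscribed-ellipse test used here.

```python
# Illustrative construction of the structuring-element shapes as 0/1 matrices.

def structuring_element(shape, cols, rows, anchor_x=-1, anchor_y=-1):
    if anchor_x < 0:
        anchor_x = cols // 2          # default anchor: element center
    if anchor_y < 0:
        anchor_y = rows // 2
    elem = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if shape == "rect":       # MORPH_RECT: E_ij = 1 everywhere
                elem[i][j] = 1
            elif shape == "cross":    # MORPH_CROSS: row and column through the anchor
                elem[i][j] = int(i == anchor_y or j == anchor_x)
            elif shape == "ellipse":  # MORPH_ELLIPSE: filled inscribed ellipse (approximate)
                dy = (i - (rows - 1) / 2) / (rows / 2)
                dx = (j - (cols - 1) / 2) / (cols / 2)
                elem[i][j] = int(dx * dx + dy * dy <= 1)
    return elem
```

Only the cross shape depends on the anchor, which is why the anchor merely shifts the result for the other shapes.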
medianBlur
--------------
Smoothes an image using the median filter.
:param src:Source 1-, 3-, or 4-channel image. When ``ksize`` is 3 or 5, the image depth should be ``CV_8U`` , ``CV_16U`` , or ``CV_32F`` . For larger aperture sizes, it can only be ``CV_8U`` .
:param dst:Destination array of the same size and type as ``src`` .
:param ksize:Aperture linear size. It must be odd and greater than 1, for example: 3, 5, 7 ...
The function smoothes an image using the median filter with the :math:`\texttt{ksize} \times \texttt{ksize}` aperture. Each channel of a multi-channel image is processed independently.
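The definition is easy to sketch on a single-channel image in plain Python: each output pixel is the median of its ``ksize x ksize`` neighborhood (replicate border handling here). This mirrors the definition only, not OpenCV's constant-time histogram-based implementation, and the helper name is hypothetical.

```python
import statistics

# Minimal median filter on a 1-channel image given as nested lists.

def median_blur(src, ksize):
    assert ksize % 2 == 1 and ksize > 1   # aperture must be odd and > 1
    rows, cols = len(src), len(src[0])
    r = ksize // 2
    dst = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # Gather the neighborhood, clamping indices at the borders.
            window = [src[min(max(y + dy, 0), rows - 1)][min(max(x + dx, 0), cols - 1)]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            dst[y][x] = statistics.median(window)
    return dst
```

Unlike linear smoothing, a single outlier pixel ("salt" noise) is discarded entirely by the median, which is the main reason to prefer this filter for impulse noise.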
morphologyEx
----------------
Performs advanced morphological transformations.
.. ocv:function:: void morphologyEx( InputArray src, OutputArray dst, int op, InputArray element, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar& borderValue=morphologyDefaultBorderValue() )
:param src:Source image. The number of channels can be arbitrary. The depth should be one of ``CV_8U``, ``CV_16U``, ``CV_16S``, ``CV_32F`` or ``CV_64F``.
:param dst:Destination image of the same size and type as ``src`` .
:param element:Structuring element.
:param op:Type of a morphological operation that can be one of the following:
:param iterations:Number of times erosion and dilation are applied.
:param borderType:Pixel extrapolation method. See :ocv:func:`borderInterpolate` for details.
:param borderValue:Border value in case of a constant border. The default value has a special meaning. See :ocv:func:`createMorphologyFilter` for details.
The function can perform advanced morphological transformations using an erosion and dilation as basic operations.
Any of the operations can be done in-place. In case of multi-channel images, each channel is processed independently.
:ocv:func:`createMorphologyFilter`
Laplacian
-------------
Calculates the Laplacian of an image.
:param dst:Destination image of the same size and the same number of channels as ``src`` .
:param ddepth:Desired depth of the destination image.
:param ksize:Aperture size used to compute the second-derivative filters. See :ocv:func:`getDerivKernels` for details. The size must be positive and odd.
:param scale:Optional scale factor for the computed Laplacian values. By default, no scaling is applied. See :ocv:func:`getDerivKernels` for details.
:param delta:Optional delta value that is added to the results prior to storing them in ``dst`` .
:param borderType:Pixel extrapolation method. See :ocv:func:`borderInterpolate` for details.
The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:
pyrDown
-----------
Smoothes an image and downsamples it.
:param src:Source image.
:param dst:Destination image. It has the specified size and the same type as ``src`` .
:param dstsize:Size of the destination image. By default, it is computed as ``Size((src.cols+1)/2, (src.rows+1)/2)`` . But in any case, the following conditions should be satisfied:
.. ocv:cfunction:: cvPyrUp( const CvArr* src, CvArr* dst, int filter=CV_GAUSSIAN_5x5 )
pyrUp
---------
Upsamples an image and then smoothes it.
:param src:Source image.
:param dst:Destination image. It has the specified size and the same type as ``src`` .
:param dstsize:Size of the destination image. By default, it is computed as ``Size(src.cols*2, src.rows*2)`` . But in any case, the following conditions should be satisfied:
.. math::
pyrMeanShiftFiltering
---------------------
Performs initial step of meanshift segmentation of an image.
:param dst:The destination image of the same format and the same size as the source.
:param sp:The spatial window radius.
:param sr:The color window radius.
:param maxLevel:Maximum level of the pyramid for the segmentation.
:param termcrit:Termination criteria: when to stop meanshift iterations.
The function implements the filtering stage of meanshift segmentation, that is, the output of the function is the filtered "posterized" image with color gradients and fine-grain texture flattened. At every pixel
``(X,Y)`` of the input image (or down-sized input image, see below) the function executes meanshift
iterations, that is, the pixel ``(X,Y)`` neighborhood in the joint space-color hyperspace is considered:
.. math::
(x,y): X- \texttt{sp} \le x \le X+ \texttt{sp} , Y- \texttt{sp} \le y \le Y+ \texttt{sp} , ||(R,G,B)-(r,g,b)|| \le \texttt{sr}
where ``(R,G,B)`` and ``(r,g,b)`` are the vectors of color components at ``(X,Y)`` and ``(x,y)``, respectively (though, the algorithm does not depend on the color space used, so any 3-component color space can be used instead). Over the neighborhood the average spatial value ``(X',Y')`` and average color vector ``(R',G',B')`` are found and they act as the neighborhood center on the next iteration:
.. math::

    (X,Y)~(X',Y'), (R,G,B)~(R',G',B').
After the iterations are over, the color components of the initial pixel (that is, the pixel from where the iterations started) are set to the final value (average color at the last iteration):
.. math::

    I(X,Y) <- (R*,G*,B*)
When ``maxLevel > 0``, the Gaussian pyramid of ``maxLevel+1`` levels is built, and the above procedure is run on the smallest layer first. After that, the results are propagated to the larger layer and the iterations are run again only on those pixels where the layer colors differ by more than ``sr`` from the lower-resolution layer of the pyramid. That makes boundaries of color regions sharper. Note that the results will be actually different from the ones obtained by running the meanshift procedure on the whole original image (i.e. when ``maxLevel==0``).
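The single iteration described above can be sketched directly from the formulas. This is a minimal NumPy sketch under stated assumptions (a hypothetical ``meanshift_step`` helper, a 3-channel input, no pyramid), not the OpenCV implementation:

```python
import numpy as np

def meanshift_step(img, X, Y, sp, sr):
    """One meanshift iteration: average the position and color over the
    pixels within spatial radius sp whose color differs from the center
    color (X, Y) by at most sr in Euclidean norm."""
    h, w = img.shape[:2]
    x0, x1 = max(X - sp, 0), min(X + sp + 1, w)
    y0, y1 = max(Y - sp, 0), min(Y + sp + 1, h)
    window = img[y0:y1, x0:x1].astype(np.float64)
    center = img[Y, X].astype(np.float64)
    # color test: ||(R,G,B)-(r,g,b)|| <= sr
    mask = np.linalg.norm(window - center, axis=2) <= sr
    ys, xs = np.nonzero(mask)
    new_X = int(round((xs + x0).mean()))
    new_Y = int(round((ys + y0).mean()))
    new_color = window[mask].mean(axis=0)
    return new_X, new_Y, new_color
```

In the real function this step is repeated until the termination criteria ``termcrit`` are met, and the final average color is written back to the starting pixel.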
@ -1402,20 +1402,20 @@ sepFilter2D
---------------
Applies a separable linear filter to an image.
.. ocv:function:: void sepFilter2D( InputArray src, OutputArray dst, int ddepth, InputArray kernelX, InputArray kernelY, Point anchor=Point(-1,-1), double delta=0, int borderType=BORDER_DEFAULT )
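A separable filter applies the 1D ``kernelX`` along every row and then ``kernelY`` along every column, which is equivalent to filtering with the 2D outer-product kernel but cheaper. A rough NumPy sketch (hypothetical ``sep_filter2d`` helper, 'valid' region only; OpenCV additionally extrapolates borders):

```python
import numpy as np

def sep_filter2d(src, kernel_x, kernel_y):
    """Correlate each row with kernel_x, then each column of the
    intermediate result with kernel_y."""
    tmp = np.apply_along_axis(lambda r: np.correlate(r, kernel_x, 'valid'), 1, src)
    return np.apply_along_axis(lambda c: np.correlate(c, kernel_y, 'valid'), 0, tmp)

src = np.arange(25, dtype=np.float64).reshape(5, 5)
k = np.array([1.0, 2.0, 1.0]) / 4.0   # 1D binomial kernel
out = sep_filter2d(src, k, k)         # same result as the 3x3 np.outer(k, k) kernel
```

The two 1D passes cost O(kw + kh) per pixel instead of O(kw * kh) for the equivalent 2D kernel.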
:param src: The source image

:param dst: The destination image

:param smoothtype: Type of the smoothing:

    * **CV_BLUR_NO_SCALE** linear convolution with :math:`\texttt{param1}\times\texttt{param2}` box kernel (all 1's). If you want to smooth different pixels with different-size box kernels, you can use the integral image that is computed using :ocv:func:`integral`

    * **CV_BLUR** linear convolution with :math:`\texttt{param1}\times\texttt{param2}` box kernel (all 1's) with subsequent scaling by :math:`1/(\texttt{param1}\cdot\texttt{param2})`

    * **CV_GAUSSIAN** linear convolution with a :math:`\texttt{param1}\times\texttt{param2}` Gaussian kernel

    * **CV_MEDIAN** median filter with a :math:`\texttt{param1}\times\texttt{param1}` square aperture

    * **CV_BILATERAL** bilateral filter with a :math:`\texttt{param1}\times\texttt{param1}` square aperture, color sigma= ``param3`` and spatial sigma= ``param4`` . If ``param1=0`` , the aperture square side is set to ``cvRound(param4*1.5)*2+1`` . Information about bilateral filtering can be found at http://www.dai.ed.ac.uk/CVonline/LOCAL\_COPIES/MANDUCHI1/Bilateral\_Filtering.html

:param param1: The first parameter of the smoothing operation, the aperture width. Must be a positive odd number (1, 3, 5, ...)

:param param2: The second parameter of the smoothing operation, the aperture height. Ignored by the ``CV_MEDIAN`` and ``CV_BILATERAL`` methods. In the case of simple scaled/non-scaled and Gaussian blur, if ``param2`` is zero, it is set to ``param1`` . Otherwise it must be a positive odd number.

:param param3: In the case of Gaussian smoothing, this parameter may specify the Gaussian :math:`\sigma` (standard deviation). If it is zero, it is calculated from the kernel size:
Using standard sigma for small kernels ( :math:`3\times 3` to :math:`7\times 7` ) gives better speed. If ``param3`` is not zero, while ``param1`` and ``param2`` are zeros, the kernel size is calculated from the sigma (to provide accurate enough operation).
The function smooths an image using one of several methods. Each of the methods has some features and restrictions, listed below:
* Blur with no scaling works with single-channel images only and supports accumulation of 8-bit to 16-bit format (similar to :ocv:func:`Sobel` and :ocv:func:`Laplacian`) and 32-bit floating point to 32-bit floating-point format.
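The integral-image trick mentioned for variable-size box kernels can be sketched as follows (hypothetical ``box_mean`` helper; OpenCV's ``integral`` builds the same table, so any box sum costs four lookups regardless of size):

```python
import numpy as np

img = np.arange(16, dtype=np.float64).reshape(4, 4)

# Integral image: ii[y, x] = sum of img[:y, :x] (one extra row/column of zeros)
ii = np.zeros((5, 5))
ii[1:, 1:] = img.cumsum(0).cumsum(1)

def box_mean(x, y, w, h):
    """Mean over the w x h box with top-left (x, y): four lookups,
    followed by the 1/(w*h) scaling that CV_BLUR applies."""
    s = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
    return s / (w * h)
```

``box_mean`` can now be evaluated with a different ``w``/``h`` at every pixel, which is exactly the use case the note about :ocv:func:`integral` refers to.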
@ -1496,23 +1496,24 @@ Sobel
---------
Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.
.. ocv:function:: void Sobel( InputArray src, OutputArray dst, int ddepth, int dx, int dy, int ksize=3, double scale=1, double delta=0, int borderType=BORDER_DEFAULT )
When ``ddepth=-1``, the destination image will have the same depth as the source. In the case of 8-bit input images, it will result in truncated derivatives.
:param dx: Order of the derivative x.
@ -1524,7 +1525,7 @@ Calculates the first, second, third, or mixed image derivatives using an extende
:param scale: Optional scale factor for the computed derivative values. By default, no scaling is applied. See :ocv:func:`getDerivKernels` for details.

:param delta: Optional delta value that is added to the results prior to storing them in ``dst`` .

:param borderType: Pixel extrapolation method. See :ocv:func:`borderInterpolate` for details.
In all cases except one, the
@ -1538,7 +1539,7 @@ derivative. When
There is also the special value ``ksize = CV_SCHARR`` (-1) that corresponds to the
:math:`3\times3` Scharr
filter that may give more accurate results than the
:math:`3\times3` Sobel. The Scharr aperture is
.. math::

    \vecthreethree{-3}{0}{3}{-10}{0}{10}{-3}{0}{3}
@ -1582,14 +1583,14 @@ Scharr
----------
Calculates the first x- or y- image derivative using Scharr operator.
.. ocv:function:: void Scharr( InputArray src, OutputArray dst, int ddepth, int dx, int dy, double scale=1, double delta=0, int borderType=BORDER_DEFAULT )
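Both operators boil down to correlating the image with a small fixed aperture. A NumPy sketch using the widely cited 3x3 Sobel and Scharr x-derivative apertures (kernel values are standard knowledge, assumed here rather than taken from this page; 'valid' region only, no border extrapolation):

```python
import numpy as np

# Standard 3x3 first-derivative apertures (dx=1, dy=0)
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], np.float64)
SCHARR_X = np.array([[ -3, 0,  3],
                     [-10, 0, 10],
                     [ -3, 0,  3]], np.float64)

def deriv(img, kernel):
    """Correlate img with kernel over the 'valid' region."""
    h, w = kernel.shape
    out = np.empty((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + h, x:x + w] * kernel).sum()
    return out

ramp = np.tile(np.arange(5.0), (5, 1))   # intensity grows left to right
gx = deriv(ramp, SOBEL_X)                # constant positive x-gradient
```

On the unit ramp, the Sobel response is the constant kernel weight sum over a step of 2, which is why derivative results often need the ``scale`` parameter to recover true gradient units.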
@ -262,19 +266,19 @@ Remaps an image to log-polar space.
.. ocv:pyoldfunction:: cv.LogPolar(src, dst, center, M, flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS)-> None
:param src: Source image

:param dst: Destination image

:param center: The transformation center; where the output precision is maximal

:param M: Magnitude scale parameter. See below

:param flags: A combination of interpolation methods and the following optional flags:

    * **CV_WARP_FILL_OUTLIERS** fills all of the destination image pixels. If some of them correspond to outliers in the source image, they are set to zero

    * **CV_WARP_INVERSE_MAP** See below
The function ``cvLogPolar`` transforms the source image using the following transformation:
@ -283,7 +287,7 @@ The function ``cvLogPolar`` transforms the source image using the following tran
.. math::

    dst( \phi , \rho ) = src(x,y)
*
@ -291,14 +295,14 @@ The function ``cvLogPolar`` transforms the source image using the following tran
.. math::

    dst(x,y) = src( \phi , \rho )
where
.. math::

    \rho = M \cdot \log{\sqrt{x^2 + y^2}} , \phi = atan(y/x)
The function emulates the human "foveal" vision and can be used for fast scale- and rotation-invariant template matching, for object tracking, and so forth. The function cannot operate in place.
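The forward mapping above can be evaluated directly for a single point. A sketch (hypothetical ``to_log_polar`` helper; ``atan2`` is used so the angle stays well defined in all quadrants, where the formula writes ``atan(y/x)``):

```python
import math

def to_log_polar(x, y, cx, cy, M):
    """Forward cvLogPolar mapping about the transformation center:
    rho = M * log(sqrt(x^2 + y^2)), phi = angle of (x, y)."""
    dx, dy = x - cx, y - cy
    rho = M * math.log(math.hypot(dx, dy))
    phi = math.atan2(dy, dx)
    return rho, phi

rho, phi = to_log_polar(15.0, 10.0, 10.0, 10.0, 40.0)
```

Because ``rho`` grows with the logarithm of the radius, scaling the source image only shifts the output along the ``rho`` axis, which is what makes the representation scale-invariant for template matching.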
:param cameraMatrix: Input camera matrix :math:`A=\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}` .

:param distCoeffs: Input vector of distortion coefficients :math:`(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])` of 4, 5, or 8 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.

:param R: Optional rectification transformation in the object space (3x3 matrix). ``R1`` or ``R2`` , computed by :ocv:func:`stereoRectify` can be passed here. If the matrix is empty, the identity transformation is assumed. In ``cvInitUndistortMap``, ``R`` is assumed to be an identity matrix.

:param newCameraMatrix: New camera matrix :math:`A'=\vecthreethree{f_x'}{0}{c_x'}{0}{f_y'}{c_y'}{0}{0}{1}` .

:param size: Undistorted image size.

:param m1type: Type of the first output map that can be ``CV_32FC1`` or ``CV_16SC2`` . See :ocv:func:`convertMaps` for details.

:param map1: The first output map.

:param map2: The second output map.
@ -606,7 +613,7 @@ where
:math:`(0,0)` and
:math:`(1,1)` elements of ``cameraMatrix`` , respectively.
By default, the undistortion functions in OpenCV (see
:ocv:func:`initUndistortRectifyMap`,
:ocv:func:`undistort`) do not move the principal point. However, when you work with stereo, it is important to move the principal points in both views to the same y-coordinate (which is required by most stereo correspondence algorithms), and maybe to the same x-coordinate too. So, you can form the new camera matrix for each view where the principal points are located at the center.
@ -621,16 +628,16 @@ Transforms an image to compensate for lens distortion.
:param dst: Output (corrected) image that has the same size and type as ``src`` .

:param cameraMatrix: Input camera matrix :math:`A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}` .

:param distCoeffs: Input vector of distortion coefficients :math:`(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])` of 4, 5, or 8 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.

:param newCameraMatrix: Camera matrix of the distorted image. By default, it is the same as ``cameraMatrix`` but you may additionally scale and shift the result by using a different matrix.
@ -660,7 +667,7 @@ Computes the ideal point coordinates from the observed point coordinates.
:param src: Observed point coordinates, 1xN or Nx1 2-channel (CV_32FC2 or CV_64FC2).
@ -668,7 +675,7 @@ Computes the ideal point coordinates from the observed point coordinates.
:param dst: Output ideal point coordinates after undistortion and reverse perspective transformation. If matrix ``P`` is identity or omitted, ``dst`` will contain normalized point coordinates.

:param distCoeffs: Input vector of distortion coefficients :math:`(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])` of 4, 5, or 8 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.

:param R: Rectification transformation in the object space (3x3 matrix). ``R1`` or ``R2`` computed by :ocv:func:`stereoRectify` can be passed here. If the matrix is empty, the identity transformation is used.
@ -696,4 +703,4 @@ where ``undistort()`` is an approximate iterative algorithm that estimates the n
The function can be used for both a stereo camera head and a monocular camera (when R is empty).
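For intuition: in the distortion-free case with identity ``R`` and ``P``, undistorting a point reduces to plain normalization by the camera matrix entries :math:`f_x, f_y, c_x, c_y`. A sketch (hypothetical helper, not the iterative OpenCV routine that also removes lens distortion):

```python
def normalize_point(u, v, fx, fy, cx, cy):
    """With zero distortion and identity R/P, undistortPoints reduces
    to normalized coordinates: x = (u - cx)/fx, y = (v - cy)/fy."""
    return (u - cx) / fx, (v - cy) / fy

x, y = normalize_point(420.0, 260.0, fx=800.0, fy=800.0, cx=400.0, cy=240.0)
```

When distortion coefficients are non-zero, the real function additionally inverts the distortion model iteratively before (optionally) reprojecting through ``P``.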
:param arrays: Source arrays. They all should have the same depth, ``CV_8U`` or ``CV_32F`` , and the same size. Each of them can have an arbitrary number of channels.
@ -124,9 +124,9 @@ Calculates the back projection of a histogram.
:param hist: Input histogram that can be dense or sparse.

:param backProject: Destination back projection array that is a single-channel array of the same size and depth as ``arrays[0]`` .

:param ranges: Array of arrays of the histogram bin boundaries in each dimension. See :ocv:func:`calcHist` .

:param scale: Optional scale factor for the output back projection.

:param uniform: Flag indicating whether the histogram is uniform or not (see above).
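The back projection itself is a simple lookup: every pixel is replaced by the value of the histogram bin it falls into. A NumPy sketch for a single-channel 8-bit image with uniform bins (hypothetical ``back_project`` helper):

```python
import numpy as np

def back_project(img, hist, nbins, scale=1.0):
    """Replace every pixel of a single-channel 8-bit image with the
    value of the histogram bin it falls into (uniform bins over 0..255),
    optionally scaled as with the 'scale' parameter."""
    bin_idx = (img.astype(np.int64) * nbins) // 256
    return hist[bin_idx] * scale

img = np.array([[0, 128], [255, 64]], np.uint8)
hist = np.arange(4, dtype=np.float64)        # a toy 4-bin histogram
bp = back_project(img, hist, nbins=4)
```

Pixels whose colors were frequent in the model histogram end up bright in the back projection, which is what makes it useful for tracking (e.g. as input to CamShift).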
@ -164,7 +164,7 @@ Compares two histograms.
:param H1: First compared histogram.

:param H2: Second compared histogram of the same size as ``H1`` .

:param method: Comparison method that could be one of the following:

    * **CV_COMP_CORREL** Correlation
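For ``CV_COMP_CORREL``, the comparison is the Pearson correlation of the two bin-value sequences (the exact formula is assumed from the standard definition, not shown on this page; 1.0 means a perfect match):

```python
import math

def hist_correl(h1, h2):
    """CV_COMP_CORREL-style comparison: Pearson correlation of the
    bin values of two equally sized histograms."""
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = math.sqrt(sum((a - m1) ** 2 for a in h1) *
                    sum((b - m2) ** 2 for b in h2))
    return num / den

score = hist_correl([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])   # proportional bins
```

Other methods in the list (chi-square, intersection, Bhattacharyya) plug different per-bin formulas into the same element-wise loop.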
@ -225,8 +225,9 @@ Computes the "minimal work" distance between two weighted point configurations.
:param signature1: First signature, a :math:`\texttt{size1}\times \texttt{dims}+1` floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used.
@ -235,18 +236,18 @@ Computes the "minimal work" distance between two weighted point configurations.
:param distType: Used metric. ``CV_DIST_L1, CV_DIST_L2`` , and ``CV_DIST_C`` stand for one of the standard metrics. ``CV_DIST_USER`` means that a pre-calculated cost matrix ``cost`` is used.

:param distFunc: Custom distance function supported by the old interface. ``CvDistanceFunction`` is defined as: ::

        typedef float (CV_CDECL * CvDistanceFunction)( const float* a,
                            const float* b, void* userdata );

    where ``a`` and ``b`` are point coordinates and ``userdata`` is the same as the last parameter.

:param cost: User-defined :math:`\texttt{size1}\times \texttt{size2}` cost matrix. Also, if a cost matrix is used, the lower boundary ``lowerBound`` cannot be calculated because it needs a metric function.

:param lowerBound: Optional input/output parameter: lower boundary of a distance between the two signatures that is a distance between mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of point configurations are not equal, or if the signatures consist of weights only (the signature matrices have a single column). You **must** initialize ``*lowerBound`` . If the calculated distance between mass centers is greater or equal to ``*lowerBound`` (it means that the signatures are far enough), the function does not calculate EMD. In any case ``*lowerBound`` is set to the calculated distance between mass centers on return. Thus, if you want to calculate both the distance between mass centers and EMD, ``*lowerBound`` should be set to 0.

:param flow: Resultant :math:`\texttt{size1} \times \texttt{size2}` flow matrix: :math:`\texttt{flow}_{i,j}` is a flow from :math:`i` -th point of ``signature1`` to :math:`j` -th point of ``signature2`` .

:param userdata: Optional pointer directly passed to the custom distance function.
The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in [RubnerSept98]_ is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, thus the complexity is exponential in the worst case, though, on average it is much faster. In the case of a real metric the lower boundary can be calculated even faster (using linear-time algorithm) and it can be used to determine roughly whether the two signatures are far enough so that they cannot relate to the same object.
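The ``lowerBound`` described above is cheap to compute on its own: it is just the distance between the weighted mass centers of the two signatures. A sketch (hypothetical ``emd_lower_bound`` helper; each signature row is ``(weight, x, y)``):

```python
import math

def emd_lower_bound(sig1, sig2):
    """Distance between the weighted mass centers of two 2D signatures,
    a valid lower bound on EMD when a true metric is used and the total
    weights are equal."""
    def center(sig):
        total = sum(w for w, _, _ in sig)
        return (sum(w * x for w, x, _ in sig) / total,
                sum(w * y for w, _, y in sig) / total)
    (x1, y1), (x2, y2) = center(sig1), center(sig2)
    return math.hypot(x1 - x2, y1 - y2)

lb = emd_lower_bound([(1.0, 0.0, 0.0)], [(1.0, 3.0, 4.0)])   # centers 5 apart
```

If this bound already exceeds a caller's rejection threshold, the expensive simplex-based EMD computation can be skipped entirely, which is exactly how the function uses ``*lowerBound``.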
@ -301,20 +302,20 @@ Locates a template within an image by using a histogram comparison.
:param images: Source images (though, you may pass CvMat** as well).

:param dst: Destination image.

:param patch_size: Size of the patch slid through the source image.

:param hist: Histogram.

:param method: Comparison method passed to :ocv:cfunc:`CompareHist` (see the function description).

:param factor: Normalization factor for histograms that affects the normalization scale of the destination image. Pass 1 if not sure.
The function calculates the back projection by comparing histograms of the source image patches with the given histogram. The function is similar to :ocv:func:`matchTemplate`, but instead of comparing the raster patch with all its possible positions within the search window, the function ``CalcBackProjectPatch`` compares histograms. See the algorithm diagram below:
The function makes a copy of the histogram. If the second histogram pointer ``*dst`` is NULL, a new histogram of the same size as ``src`` is created. Otherwise, both histograms must have equal types and sizes. Then the function copies the bin values of the source histogram to the destination histogram and sets the same bin value ranges as in ``src``.
.. _createhist:
@ -375,33 +376,33 @@ Creates a histogram.
.. ocv:cfunction:: CvHistogram* cvCreateHist( int dims, int* sizes, int type, float** ranges=NULL, int uniform=1 )
:param dims: Number of histogram dimensions.

:param sizes: Array of the histogram dimension sizes.

:param type: Histogram representation format. ``CV_HIST_ARRAY`` means that the histogram data is represented as a multi-dimensional dense array CvMatND. ``CV_HIST_SPARSE`` means that histogram data is represented as a multi-dimensional sparse array ``CvSparseMat``.

:param ranges: Array of ranges for the histogram bins. Its meaning depends on the ``uniform`` parameter value. The ranges are used when the histogram is calculated or backprojected to determine which histogram bin corresponds to which value/tuple of values from the input image(s).
:param uniform: Uniformity flag. If not zero, the histogram has evenly
    spaced bins and for every :math:`0<=i<cDims`, ``ranges[i]``
    is an array of two numbers: lower and upper boundaries for the i-th
    histogram dimension.
    The whole range [lower,upper] is then split
    into ``dims[i]`` equal parts to determine the ``i``-th input
    tuple value ranges for every histogram bin. And if ``uniform=0`` ,
    then the ``i``-th element of the ``ranges`` array contains ``dims[i]+1`` elements: :math:`\texttt{lower}_0, \texttt{upper}_0, ...,
    \texttt{upper}_{dims[i]-1}`,
    where :math:`\texttt{lower}_j` and :math:`\texttt{upper}_j`
    are lower and upper
    boundaries of the ``i``-th input tuple value for the ``j``-th
    bin, respectively. In either case, the input values that are beyond
    the specified range for a histogram bin are not counted by :ocv:cfunc:`CalcHist` and filled with 0 by :ocv:cfunc:`CalcBackProject`.
The function creates a histogram of the specified size and returns a pointer to the created histogram. If the array ``ranges`` is 0, the histogram bin ranges must be specified later via the function :ocv:cfunc:`SetHistBinRanges`. Though :ocv:cfunc:`CalcHist` and :ocv:cfunc:`CalcBackProject` may process 8-bit images without setting bin ranges, they assume they are equally spaced in 0 to 255 bins.
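The uniform-bin rule above ("[lower, upper] split into equal parts") determines which bin each input value lands in. A sketch (hypothetical ``uniform_bin`` helper):

```python
def uniform_bin(value, lower, upper, nbins):
    """For a uniform histogram dimension, [lower, upper) is split into
    nbins equal parts; values outside the range are not counted
    (returned as None), matching the behavior described above."""
    if not (lower <= value < upper):
        return None
    return int((value - lower) * nbins / (upper - lower))

b = uniform_bin(130.0, 0.0, 256.0, 8)   # 256/8 = 32-wide bins
```

For the non-uniform case, the bin would instead be found by searching the explicit ``lower_j``/``upper_j`` boundary list.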
@ -416,27 +417,27 @@ Returns a pointer to the histogram bin.
.. ocv:cfunction:: float cvGetHistValue_3D(CvHistogram hist, int idx0, int idx1, int idx2)

.. ocv:cfunction:: float cvGetHistValue_nD(CvHistogram hist, int idx)
:param min_value: Pointer to the minimum value of the histogram.

:param max_value: Pointer to the maximum value of the histogram.

:param min_idx: Pointer to the array of coordinates for the minimum.

:param max_idx: Pointer to the array of coordinates for the maximum.
The function finds the minimum and maximum histogram bins and their positions. All output arguments are optional. Among several extrema with the same value, the ones with the minimum index (in the lexicographical order) are returned.
@ -469,19 +470,19 @@ MakeHistHeaderForArray
Makes a histogram out of an array.
.. ocv:cfunction:: CvHistogram* cvMakeHistHeaderForArray( int dims, int* sizes, CvHistogram* hist, float* data, float** ranges=NULL, int uniform=1 )
:param dims: Number of the histogram dimensions.

:param sizes: Array of the histogram dimension sizes.

:param hist: Histogram header initialized by the function.

:param data: Array used to store histogram bins.
:param ranges: Histogram bin ranges. See :ocv:cfunc:`CreateHist` for details.

:param uniform: Uniformity flag. See :ocv:cfunc:`CreateHist` for details.
The function initializes the histogram, whose header and bins are allocated by the user. :ocv:cfunc:`ReleaseHist` does not need to be called afterwards. Only dense histograms can be initialized this way. The function returns ``hist``.
The macros return the value of the specified bin of the 1D, 2D, 3D, or N-D histogram. In case of a sparse histogram, the function returns 0. If the bin is not present in the histogram, no new bin is created.
:param hist: Double pointer to the released histogram.
The function releases the histogram (header and the data). The pointer to the histogram is cleared by the function. If ``*hist`` pointer is already ``NULL``, the function does nothing.
@ -541,12 +542,12 @@ Sets the bounds of the histogram bins.
.. ocv:cfunction:: void cvSetHistBinRanges( CvHistogram* hist, float** ranges, int uniform=1 )
:param hist: Histogram.

:param ranges: Array of bin ranges arrays. See :ocv:cfunc:`CreateHist` for details.

:param uniform: Uniformity flag. See :ocv:cfunc:`CreateHist` for details.
This is a standalone function for setting bin ranges in the histogram. For a more detailed description of the parameters ``ranges`` and ``uniform``, see the :ocv:cfunc:`CalcHist` function that can initialize the ranges as well. Ranges for the histogram bins must be set before the histogram is calculated or the backproject of the histogram is calculated.
:param contour: Input contour. Currently, only integer point coordinates are allowed.

:param hist: Calculated histogram. It must be two-dimensional.
The function calculates a 2D pair-wise geometrical histogram (PGH), described in [Iivarinen97]_ for the contour. The algorithm considers every pair of contour
edges. The angle between the edges and the minimum/maximum distances
are determined for every pair. To do this, each of the edges in turn
:param dst: Destination image of the same size and depth as ``src`` .

:param code: Color space conversion code. See the description below.

:param dstCn: Number of channels in the destination image. If the parameter is 0, the number of the channels is derived automatically from ``src`` and ``code`` .
@ -98,7 +99,7 @@ The conventional ranges for R, G, and B channel values are:
0 to 255 for ``CV_8U`` images
*
0 to 65535 for ``CV_16U`` images
*
0 to 1 for ``CV_32F`` images
@ -414,22 +415,22 @@ Calculates the distance to the closest zero pixel for each pixel of the source i
.. ocv:function:: void distanceTransform( InputArray src, OutputArray dst, OutputArray labels, int distanceType, int maskSize, int labelType=DIST_LABEL_CCOMP )
:param dst: Output image with calculated distances. It is a 32-bit floating-point, single-channel image of the same size as ``src`` .

:param distanceType: Type of distance. It can be ``CV_DIST_L1, CV_DIST_L2`` , or ``CV_DIST_C`` .

:param maskSize: Size of the distance transform mask. It can be 3, 5, or ``CV_DIST_MASK_PRECISE`` (the latter option is only supported by the first function). In case of the ``CV_DIST_L1`` or ``CV_DIST_C`` distance type, the parameter is forced to 3 because a :math:`3\times 3` mask gives the same result as :math:`5\times 5` or any larger aperture.

:param labels: Optional output 2D array of labels (the discrete Voronoi diagram). It has the type ``CV_32SC1`` and the same size as ``src`` . See the details below.

:param labelType: Type of the label array to build. If ``labelType==DIST_LABEL_CCOMP`` then each connected component of zeros in ``src`` (as well as all the non-zero pixels closest to the connected component) will be assigned the same label. If ``labelType==DIST_LABEL_PIXEL`` then each zero pixel (and all the non-zero pixels closest to it) gets its own label.
The function ``distanceTransform`` calculates the approximate or precise distance from every pixel of the source image to the closest zero pixel.
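The definition can be sketched with a brute-force loop (hypothetical ``dist_transform`` helper; the real function uses fast mask-based propagation instead of comparing against every zero pixel):

```python
import math

def dist_transform(binary):
    """Brute-force distance transform: for every pixel, the Euclidean
    (CV_DIST_L2-style) distance to the closest zero pixel."""
    zeros = [(y, x) for y, row in enumerate(binary)
             for x, v in enumerate(row) if v == 0]
    return [[min(math.hypot(y - zy, x - zx) for zy, zx in zeros)
             for x in range(len(row))]
            for y, row in enumerate(binary)]

d = dist_transform([[0, 1, 1],
                    [1, 1, 1]])
```

This is O(pixels * zeros); the mask-based approximation reaches the same answer (up to the mask's distance approximation) in two raster passes.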
@ -483,18 +484,18 @@ floodFill
-------------
Fills a connected component with the given color.
.. ocv:function:: int floodFill( InputOutputArray image, Point seedPoint, Scalar newVal, Rect* rect=0, Scalar loDiff=Scalar(), Scalar upDiff=Scalar(), int flags=4 )

.. ocv:function:: int floodFill( InputOutputArray image, InputOutputArray mask, Point seedPoint, Scalar newVal, Rect* rect=0, Scalar loDiff=Scalar(), Scalar upDiff=Scalar(), int flags=4 )
:param image: Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the ``FLOODFILL_MASK_ONLY`` flag is set in the second variant of the function. See the details below.

:param mask: (For the second function only) Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller. The function uses and updates the mask, so you take responsibility of initializing the ``mask`` content. Flood-filling cannot go across non-zero pixels in the mask. For example, an edge detector output can be used as a mask to stop filling at edges. It is possible to use the same mask in multiple calls to the function to make sure the filled area does not overlap.
.. note:: Since the mask is larger than the filled image, a pixel :math:`(x, y)` in ``image`` corresponds to the pixel :math:`(x+1, y+1)` in the ``mask`` .
@ -517,14 +518,14 @@ Fills a connected component with the given color.
The functions ``floodFill`` fill a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at
:math:`(x,y)` is considered to belong to the repainted domain if:
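A sketch of the fixed-range variant of that closeness test (the floating-range variant compares against the already-filled neighbor instead of the seed; hypothetical ``flood_fill`` helper, 4-connectivity, single-channel image as nested lists):

```python
from collections import deque

def flood_fill(img, seed, new_val, lo_diff, up_diff):
    """BFS flood fill: a pixel joins the repainted domain if
    seed_value - lo_diff <= value <= seed_value + up_diff."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    ref = img[sy][sx]
    q, seen = deque([seed]), {seed}
    while q:
        y, x = q.popleft()
        img[y][x] = new_val
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and ref - lo_diff <= img[ny][nx] <= ref + up_diff):
                seen.add((ny, nx))
                q.append((ny, nx))
    return img

grid = [[10, 11, 50],
        [12, 13, 50]]
flood_fill(grid, (0, 0), 99, 5, 5)   # the 50s are outside [5, 15]
```

Setting ``flags`` to 8 in the real function switches to 8-connectivity, i.e. the diagonal neighbors would be examined as well.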
* **GC_PR_BGD** defines a possible background pixel.
:param rect: ROI containing a segmented object. The pixels outside of the ROI are marked as "obvious background". The parameter is only used when ``mode==GC_INIT_WITH_RECT`` .

:param bgdModel: Temporary array for the background model. Do not modify it while you are processing the same image.

:param fgdModel: Temporary array for the foreground model. Do not modify it while you are processing the same image.

:param iterCount: Number of iterations the algorithm should make before returning the result. Note that the result can be refined with further calls with ``mode==GC_INIT_WITH_MASK`` or ``mode==GC_EVAL`` .

:param mode: Operation mode that could be one of the following:
* **GC_INIT_WITH_RECT** The function initializes the state and the mask using the provided rectangle. After that it runs ``iterCount`` iterations of the algorithm.
.. ocv:pyfunction:: cv2.matchTemplate(image, templ, method[, result]) -> result
@ -19,7 +19,7 @@ Compares a template against overlapped image regions.
:param templ: Searched template. It must not be greater than the source image and must have the same data type.

:param result: Map of comparison results. It must be single-channel 32-bit floating-point. If ``image`` is :math:`W \times H` and ``templ`` is :math:`w \times h` , then ``result`` is :math:`(W-w+1) \times (H-h+1)` .

:param method: Parameter specifying the comparison method (see below).
The function slides through ``image`` , compares the
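A NumPy sketch of the sliding comparison, using a sum-of-squared-differences score in the spirit of the ``CV_TM_SQDIFF`` method (hypothetical helper; note the result size matches the :math:`(W-w+1) \times (H-h+1)` formula above):

```python
import numpy as np

def match_template_sqdiff(image, templ):
    """Slide templ over image and record the sum of squared
    differences at every placement; 0 marks a perfect match."""
    H, W = image.shape
    h, w = templ.shape
    result = np.empty((H - h + 1, W - w + 1))
    for y in range(result.shape[0]):
        for x in range(result.shape[1]):
            result[y, x] = ((image[y:y+h, x:x+w] - templ) ** 2).sum()
    return result

image = np.array([[1, 2, 3],
                  [4, 5, 6]], np.float64)
res = match_template_sqdiff(image, image[:, 1:])   # template cut from the image
```

The other methods substitute correlation or normalized-correlation scores for the squared difference, in which case the best match is the maximum of ``result`` rather than the minimum.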
:param array: Raster image (single-channel, 8-bit or floating-point 2D array) or an array ( :math:`1 \times N` or :math:`N \times 1` ) of 2D points (``Point`` or ``Point2f`` ).

:param binaryImage: If it is true, all non-zero image pixels are treated as 1's. The parameter is used for images only.

:param moments: Output moments.
The function computes moments, up to the 3rd order, of a vector shape or a rasterized shape. The results are returned in the structure ``Moments`` defined as: ::
@ -30,7 +31,7 @@ The function computes moments, up to the 3rd order, of a vector shape or a raste
:math:`\texttt{nu}_{10}=\texttt{nu}_{01}=\texttt{mu}_{10}=\texttt{mu}_{01}=0` , hence the values are not stored.

The moments of a contour are defined in the same way but computed using Green's formula (see http://en.wikipedia.org/wiki/Green_theorem). So, due to a limited raster resolution, the moments computed for a contour are slightly different from the moments computed for the same rasterized contour.
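For a rasterized shape, the spatial moments are plain weighted pixel sums, :math:`m_{pq} = \sum_{x,y} x^p y^q I(x,y)`. A NumPy sketch up to first order (hypothetical helper; OpenCV computes all moments up to third order plus the central and normalized variants):

```python
import numpy as np

def raster_moments(img):
    """Spatial moments m_pq = sum over pixels of x^p * y^q * I(x, y),
    up to first order, plus the centroid they yield."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    m10 = (xs * img).sum()
    m01 = (ys * img).sum()
    return m00, (m10 / m00, m01 / m00)   # centroid = (m10/m00, m01/m00)

img = np.zeros((5, 5))
img[2, 3] = 2.0                          # a single bright pixel
m00, centroid = raster_moments(img)
```

The centroid computed this way is the usual starting point for the central moments ``mu_pq``, which subtract it out to gain translation invariance.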
HuMoments
-------------
Calculates seven Hu invariants.
..ocv:function:: void HuMoments( const Moments& moments, double* hu )
..ocv:function:: void HuMoments( const Moments& m, OutputArray hu )
..ocv:cfunction:: int cvFindContours( CvArr* image, CvMemStorage* storage, CvSeq** first_contour, int header_size=sizeof(CvContour), int mode=CV_RETR_LIST, int method=CV_CHAIN_APPROX_SIMPLE, CvPoint offset=cvPoint(0,0) )
:param image:Source, an 8-bit single-channel image. Non-zero pixels are treated as 1's. Zero pixels remain 0's, so the image is treated as ``binary`` . You can use :ocv:func:`compare` , :ocv:func:`inRange` , :ocv:func:`threshold` , :ocv:func:`adaptiveThreshold` , :ocv:func:`Canny` , and others to create a binary image out of a grayscale or color one. The function modifies the ``image`` while extracting the contours.
Draws contour outlines or filled contours.
..ocv:cfunction:: CvSeq* cvApproxPoly( const void*src_seq, int header_size, CvMemStorage* storage, int method, double parameter, int parameter2=0 )
:param curve:Input vector of 2D points stored in:
* ``std::vector`` or ``Mat`` (C++ interface)
* ``Nx2`` numpy array (Python interface)
* ``CvSeq`` or ``CvMat`` (C interface)
:param approxCurve:Result of the approximation. The type should match the type of the input curve. In the case of the C interface, the approximated curve is stored in the memory storage and a pointer to it is returned.
:param epsilon:Parameter specifying the approximation accuracy. This is the maximum distance between the original curve and its approximation.
:param closed:If true, the approximated curve is closed (its first and last vertices are connected). Otherwise, it is not closed.
:param headerSize:Header size of the approximated curve. Normally, ``sizeof(CvContour)`` is used.
:param storage:Memory storage where the approximated curve is stored.
:param method:Contour approximation algorithm. Only ``CV_POLY_APPROX_DP`` is supported.
:param recursive:Recursion flag. If it is non-zero and ``curve`` is ``CvSeq*``, the function ``cvApproxPoly`` approximates all the contours accessible from ``curve`` by ``h_next`` and ``v_next`` links.
The functions ``approxPolyDP`` approximate a curve or a polygon with another curve/polygon with fewer vertices so that the distance between them is less than or equal to the specified precision. It uses the Douglas-Peucker algorithm
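The Douglas-Peucker idea can be sketched in a few lines of Python (an illustrative recursive sketch, not the OpenCV implementation, which is iterative and works on integer contours):

```python
import math

def approx_poly_dp(points, epsilon):
    """Recursive Douglas-Peucker: keep the point farthest from the chord
    between the endpoints if it deviates more than epsilon, and recurse."""
    def pt_line_dist(p, a, b):
        # Distance from point p to the line through a and b.
        (ax, ay), (bx, by), (px, py) = a, b, p
        num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
        den = math.hypot(bx - ax, by - ay)
        return num / den if den else math.hypot(px - ax, py - ay)

    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = pt_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:                      # whole chord is close enough
        return [points[0], points[-1]]
    left = approx_poly_dp(points[:idx + 1], epsilon)
    right = approx_poly_dp(points[idx:], epsilon)
    return left[:-1] + right                 # drop the duplicated pivot

curve = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(approx_poly_dp(curve, 1.0))
```

Points on a nearly straight segment collapse to its endpoints, while corners farther than ``epsilon`` from the chord are kept.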
ApproxChains
-------------
Approximates Freeman chain(s) with a polygonal curve.
..ocv:cfunction:: CvSeq* cvApproxChains( CvSeq*src_seq, CvMemStorage* storage, int method=CV_CHAIN_APPROX_SIMPLE, double parameter=0, int minimal_perimeter=0, int recursive=0 )
:param chain:Pointer to the approximated Freeman chain that can refer to other chains.
:param storage:Storage location for the resulting polylines.
:param method:Approximation method (see the description of the function :ocv:cfunc:`FindContours` ).
:param parameter:Method parameter (not used now).
:param minimalPerimeter:Approximates only those contours whose perimeters are not less than ``minimal_perimeter`` . Other chains are removed from the resulting structure.
:param recursive:Recursion flag. If it is non-zero, the function approximates all chains that can be obtained from ``chain`` by using the ``h_next`` or ``v_next`` links. Otherwise, the single input chain is approximated.
This is a standalone contour approximation routine, not represented in the new interface. When :ocv:cfunc:`FindContours` retrieves contours as Freeman chains, it calls the function to get approximated contours, represented as polygons.
Calculates a contour perimeter or a curve length.
:param points:Input 2D point set, stored in ``std::vector`` or ``Mat``.
:param hull:Output convex hull. It is either an integer vector of indices or vector of points. In the first case, the ``hull`` elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, ``hull`` elements are the convex hull points themselves.
:param storage:Output memory storage in the old API (``cvConvexHull2`` returns a sequence containing the convex hull points or their indices).
:param clockwise:Orientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counter-clockwise. The usual screen coordinate system is assumed so that the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.
:param orientation:Convex hull orientation parameter in the old API, ``CV_CLOCKWISE`` or ``CV_COUNTERCLOCKWISE``.
:param returnPoints:Operation flag. In case of a matrix, when the flag is true, the function returns convex hull points. Otherwise, it returns indices of the convex hull points. When the output array is ``std::vector``, the flag is ignored, and the output depends on the type of the vector: ``std::vector<int>`` implies ``returnPoints=true``, ``std::vector<Point>`` implies ``returnPoints=false``.
The functions find the convex hull of a 2D point set using Sklansky's algorithm
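What the function computes can be illustrated with Andrew's monotone chain algorithm (deliberately a different algorithm than the Sklansky one named above, chosen here for brevity):

```python
def convex_hull(points):
    """Convex hull via Andrew's monotone chain, returned counter-clockwise
    in mathematical (y-up) coordinates. Shown only to illustrate the
    output of convexHull, not OpenCV's Sklansky-based implementation."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop dups

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The interior point ``(1, 1)`` is discarded, matching the note that the hull points are a subset of the original point set.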
Finds the convexity defects of a contour.
:param contour:Input contour.
:param convexhull:Convex hull obtained using :ocv:func:`convexHull` that should contain indices of the contour points that make the hull.
:param convexityDefects:The output vector of convexity defects. In C++ and the new Python/Java interface, each convexity defect is represented as a 4-element integer vector (a.k.a. ``cv::Vec4i``): ``(start_index, end_index, farthest_pt_index, fixpt_depth)``, where the indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and ``fixpt_depth`` is a fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, the floating-point value of the depth is ``fixpt_depth/256.0``. In the C interface, a convexity defect is represented by the ``CvConvexityDefect`` structure - see below.
:param storage:Container for the output sequence of convexity defects. If it is NULL, the contour or hull (in that order) storage is used.
The function finds all convexity defects of the input contour and returns a sequence of the ``CvConvexityDefect`` structures, where ``CvConvexityDefect`` is defined as: ::
struct CvConvexityDefect
Fits an ellipse around a set of 2D points.
:param points:Input vector of 2D points, stored in:
* ``std::vector<>`` or ``Mat`` (C++ interface)
* ``CvSeq*`` or ``CvMat*`` (C interface)
* Nx2 numpy array (Python interface)
The function calculates and returns the minimum-area bounding rectangle (possibly rotated) for a specified point set. See the OpenCV sample ``minarea.cpp`` .
Finds a circle of the minimum area enclosing a 2D point set.
..ocv:function:: void minEnclosingCircle( InputArray points, Point2f& center, float& radius )
..ocv:pyfunction:: cv2.minEnclosingCircle(points) -> center, radius
..ocv:cfunction:: int cvMinEnclosingCircle( const CvArr* points, CvPoint2D32f* center, float* radius )
..ocv:pyoldfunction:: cv.MinEnclosingCircle(points)-> (int, center, radius)
:param points:Input vector of 2D points, stored in:
:param vely:Vertical component of the optical flow of the same size as ``velx`` , 32-bit floating-point, single-channel
The function calculates the optical flow for overlapped blocks ``blockSize.width x blockSize.height`` pixels each, thus the velocity fields are smaller than the original images. For every block in ``prev``
CalcOpticalFlowHS
-----------------
Calculates the optical flow for two images using Horn-Schunck algorithm.
:param usePrevious:Flag that specifies whether to use the input velocity as initial approximations or not.
:param velx:Horizontal component of the optical flow of the same size as input images, 32-bit floating-point, single-channel
:param vely:Vertical component of the optical flow of the same size as input images, 32-bit floating-point, single-channel
:param lambda:Smoothness weight. The larger it is, the smoother the optical flow map you get.
:param criteria:Criteria of termination of velocity computing
The function computes the flow for every pixel of the first input image using the Horn and Schunck algorithm [Horn81]_. The function is obsolete. To track sparse features, use :ocv:func:`calcOpticalFlowPyrLK`. To track all the pixels, use :ocv:func:`calcOpticalFlowFarneback`.
CalcOpticalFlowLK
-----------------
Calculates the optical flow for two images using Lucas-Kanade algorithm.
:param winSize:Size of the averaging window used for grouping pixels
:param velx:Horizontal component of the optical flow of the same size as input images, 32-bit floating-point, single-channel
:param vely:Vertical component of the optical flow of the same size as input images, 32-bit floating-point, single-channel
The function computes the flow for every pixel of the first input image using the Lucas and Kanade algorithm [Lucas81]_. The function is obsolete. To track sparse features, use :ocv:func:`calcOpticalFlowPyrLK`. To track all the pixels, use :ocv:func:`calcOpticalFlowFarneback`.
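The Lucas-Kanade method assumes constant flow within each averaging window and solves a small least-squares system there. A minimal NumPy sketch of that per-window system, driven by synthetic gradients with a known flow (this is the normal-equation idea only, not the OpenCV implementation):

```python
import numpy as np

def lk_flow_in_window(Ix, Iy, It):
    """Solve the Lucas-Kanade least-squares system for one window:
    find (u, v) minimizing sum (Ix*u + Iy*v + It)^2 over window pixels."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # N x 2 design matrix
    b = -It.ravel()
    # Equivalent to solving the normal equations (A^T A) [u, v]^T = A^T b
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv

# Synthetic check: spatial gradients with a known flow (u, v) = (0.5, -0.25),
# so the brightness-constancy residual Ix*u + Iy*v + It is exactly zero.
rng = np.random.default_rng(0)
Ix = rng.normal(size=(5, 5))
Iy = rng.normal(size=(5, 5))
It = -(Ix * 0.5 + Iy * -0.25)
u, v = lk_flow_in_window(Ix, Iy, It)
print(round(u, 3), round(v, 3))  # 0.5 -0.25
```

On real images the window must contain gradient variation in both directions, otherwise the 2x2 system is singular (the aperture problem).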
CvBoostParams
-------------
..ocv:struct:: CvBoostParams : public CvDTreeParams
Boosting training parameters.
@ -82,13 +82,13 @@ The constructors.
..ocv:function:: CvBoostParams::CvBoostParams( int boost_type, int weak_count, double weight_trim_rate, int max_depth, bool use_surrogates, const float* priors )
:param boost_type:Type of the boosting algorithm. Possible values are:
* **CvBoost::DISCRETE** Discrete AdaBoost.
* **CvBoost::REAL** Real AdaBoost. It is a technique that utilizes confidence-rated predictions and works well with categorical data.
* **CvBoost::LOGIT** LogitBoost. It can produce good regression fits.
* **CvBoost::GENTLE** Gentle AdaBoost. It puts less weight on outlier data points and for that reason is often good with regression data.
Gentle AdaBoost and Real AdaBoost are often the preferable choices.
:param weak_count:The number of weak classifiers.
CvBoostTree
-----------
..ocv:class:: CvBoostTree : public CvDTree
The weak tree classifier, a component of the boosted tree classifier :ocv:class:`CvBoost`, is a derivative of :ocv:class:`CvDTree`. Normally, there is no need to use the weak classifiers directly. However, they can be accessed as elements of the sequence :ocv:member:`CvBoost::weak`, retrieved by :ocv:func:`CvBoost::get_weak_predictors`.
CvBoost
-------
..ocv:class:: CvBoost : public CvStatModel
Boosted tree classifier derived from :ocv:class:`CvStatModel`.
Default and training constructors.
The constructors follow conventions of :ocv:func:`CvStatModel::CvStatModel`. See :ocv:func:`CvStatModel::train` for parameters descriptions.
Predicts a response for an input sample.
:param weak_responses:Optional output parameter, a floating-point vector with responses of each individual weak classifier. The number of elements in the vector must be equal to the slice length.
:param slice:Continuous subset of the sequence of weak classifiers to be used for prediction. By default, all the weak classifiers are used.
:param raw_mode:Normally, it should be set to ``false``.
:param return_sum:If ``true``, the sum of votes is returned instead of the class label.
The method runs the sample through the trees in the ensemble and returns the output class label based on the weighted voting.
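The weighted-voting step can be sketched abstractly (a toy sketch with made-up stumps and weights; ``CvBoost::predict`` operates on trained ``CvBoostTree`` objects, not Python callables):

```python
def boosted_predict(sample, weak_classifiers, return_sum=False):
    """Weighted voting over an ensemble: each weak classifier returns
    -1/+1 and contributes with its weight; the sign of the weighted sum
    gives the predicted class label."""
    votes = sum(w * h(sample) for h, w in weak_classifiers)
    return votes if return_sum else (1 if votes >= 0 else -1)

# Three hypothetical decision stumps with their voting weights
stumps = [
    (lambda x: 1 if x[0] > 0.5 else -1, 0.9),
    (lambda x: 1 if x[1] > 0.5 else -1, 0.4),
    (lambda x: -1, 0.2),                       # a weak, biased voter
]
print(boosted_predict((0.8, 0.1), stumps))                 # 1
print(round(boosted_predict((0.8, 0.1), stumps, True), 3)) # 0.3
```

The second call mirrors the ``return_sum=true`` mode: ``0.9 - 0.4 - 0.2 = 0.3``, a positive sum, hence the label ``1``.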
Removes the specified weak classifiers.
:param slice:Continuous subset of the sequence of weak classifiers to be removed.
The method removes the specified weak classifiers from the sequence.
..note:: Do not confuse this method with the pruning of individual decision trees, which is currently not supported.
child node as the next observed node) or to the right based on the
value of a certain variable whose index is stored in the observed
node. The following variables are possible:
* **Ordered variables.** The variable value is compared with a threshold that is also stored in the node. If the value is less than the threshold, the procedure goes to the left. Otherwise, it goes to the right. For example, if the weight is less than 1 kilogram, the procedure goes to the left, else to the right.
* **Categorical variables.** A discrete variable value is tested to see whether it belongs to a certain subset of values (also stored in the node) from a limited set of values the variable could take. If it does, the procedure goes to the left. Otherwise, it goes to the right. For example, if the color is green or red, go to the left, else to the right.
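The two routing rules above can be sketched with hypothetical dict-based nodes (the node layout here is invented for illustration and is not the actual ``CvDTreeNode``/``CvDTreeSplit`` memory layout):

```python
def route(node, sample):
    """Traverse a tree using the two split rules described above:
    ordered variables compare against a threshold, categorical
    variables test subset membership."""
    while "split" in node:
        s = node["split"]
        value = sample[s["variable_index"]]
        if s["kind"] == "ordered":
            go_left = value < s["threshold"]
        else:                              # categorical
            go_left = value in s["subset"]
        if s.get("inversed"):              # optional inverse split rule
            go_left = not go_left
        node = node["left"] if go_left else node["right"]
    return node["value"]                   # reached a leaf

tree = {
    "split": {"kind": "ordered", "variable_index": 0, "threshold": 1.0},
    "left": {"value": "light"},
    "right": {
        "split": {"kind": "categorical", "variable_index": 1,
                  "subset": {"green", "red"}},
        "left": {"value": "go-left-class"},
        "right": {"value": "go-right-class"},
    },
}
print(route(tree, (0.5, "blue")))    # light
print(route(tree, (2.0, "green")))   # go-left-class
```

Each internal node thus holds a ``(variable_index, decision_rule)`` pair, and a leaf holds the predicted value.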
So, in each node, a pair of entities (``variable_index`` , ``decision_rule
CvDTreeSplit
------------
..ocv:struct:: CvDTreeSplit
The structure represents a possible decision tree node split. It has public members:
..ocv:member:: int inversed
If it is not zero, an inverse split rule is used, that is, the left and right branches are exchanged in the rule expressions below.
..ocv:member:: float quality
The split quality, a positive number. It is used to choose the best primary split, then to choose and sort the surrogate splits. After the tree is constructed, it is also used to compute variable importance.
..ocv:member:: CvDTreeSplit* next
Bit array indicating the value subset in case of split on a categorical variable. The rule is: ::
if var_value in subset
then next_node <- left
else next_node <- right
..ocv:member:: float ord::c
The threshold value in case of split on an ordered variable. The rule is: ::
if var_value < ord.c
then next_node<-left
else next_node<-right
..ocv:member:: int ord::split_point
CvDTreeNode
-----------
..ocv:struct:: CvDTreeNode
The structure represents a node in a decision tree. It has public members:
..ocv:member:: int class_idx
Class index normalized to 0..class_count-1 range and assigned to the node. It is used internally in classification trees and tree ensembles.
..ocv:member:: int sample_count
The number of samples that fall into the node at the training stage. It is used to resolve the difficult cases - when the variable for the primary split is missing and all the variables for other surrogate splits are missing too. In this case the sample is directed to the left if ``left->sample_count > right->sample_count`` and to the right otherwise.
..ocv:member:: int depth
Depth of the node. The root node depth is 0, and a child node's depth is its parent's depth plus 1.
Other numerous fields of ``CvDTreeNode`` are used internally at the training stage.
CvDTreeParams
-------------
..ocv:struct:: CvDTreeParams
The structure contains all the decision tree training parameters. You can initialize it with the default constructor and then override any parameters directly before training, or fully initialize the structure using the advanced variant of the constructor.
CvDTreeParams::CvDTreeParams
----------------------------
The constructors.
..ocv:function:: CvDTreeParams::CvDTreeParams()
..ocv:function:: CvDTreeParams::CvDTreeParams( int max_depth, int min_sample_count, float regression_accuracy, bool use_surrogates, int max_categories, int cv_folds, bool use_1se_rule, bool truncate_pruned_tree, const float* priors )
:param max_depth:The maximum possible depth of the tree. That is, the training algorithm attempts to split a node while its depth is less than ``max_depth``. The actual depth may be smaller if the other termination criteria are met (see the outline of the training procedure in the beginning of the section), and/or if the tree is pruned.
:param min_sample_count:If the number of samples in a node is less than this parameter then the node will not be split.
:param regression_accuracy:Termination criteria for regression trees. If all absolute differences between an estimated value in a node and values of train samples in this node are less than this parameter then the node will not be split.
:param use_surrogates:If true then surrogate splits will be built. These splits allow working with missing data and computing variable importance correctly.
:param max_categories:Cluster possible values of a categorical variable into ``K`` :math:`\leq` ``max_categories`` clusters to find a suboptimal split. If a discrete variable, on which the training procedure tries to make a split, takes more than ``max_categories`` values, the precise best subset estimation may take a very long time because the algorithm is exponential. Instead, many decision tree engines (including ML) try to find a sub-optimal split in this case by clustering all the samples into ``max_categories`` clusters, that is, some categories are merged together. The clustering is applied only in ``n``>2-class classification problems for categorical variables with ``N > max_categories`` possible values. In case of regression and 2-class classification, the optimal split can be found efficiently without employing clustering, thus the parameter is not used in these cases.
:param cv_folds:If ``cv_folds > 1`` then prune a tree with ``K``-fold cross-validation where ``K`` is equal to ``cv_folds``.
Decision tree training data and shared data for tree ensembles. The structure is mostly used internally for storing both standalone trees and tree ensembles efficiently. Basically, it contains the following types of information:
CvDTree
-------
..ocv:class:: CvDTree : public CvStatModel
The class implements a decision tree as described in the beginning of this section.
:param preprocessedInput:This parameter is normally set to ``false``, implying a regular input. If it is ``true``, the method assumes that all the values of the discrete input variables have already been normalized to the :math:`0` to :math:`num\_of\_categories_i-1` ranges since the decision tree uses such normalized representation internally. It is useful for faster prediction with tree ensembles. For ordered input variables, the flag is not used.
The method traverses the decision tree and returns the reached leaf node as output. The prediction result, either the class label or the estimated function value, may be retrieved as the ``value`` field of the :ocv:class:`CvDTreeNode` structure, for example: ``dtree->predict(sample,mask)->value``.
Returns the error of the decision tree.
Extremely randomized trees have been introduced by Pierre Geurts, Damien Ernst and Louis Wehenkel.
CvERTrees
----------
..ocv:class:: CvERTrees : public CvRTrees
The class implements the Extremely randomized trees algorithm. ``CvERTrees`` is inherited from :ocv:class:`CvRTrees` and has the same interface, so see description of :ocv:class:`CvRTrees` class to get details. To set the training parameters of Extremely randomized trees the same class :ocv:class:`CvRTParams` is used.
:param nclusters:The number of mixture components in the Gaussian mixture model. The default value of the parameter is ``EM::DEFAULT_NCLUSTERS=5``. Some EM implementations could determine the optimal number of mixtures within a specified value range, but that is not the case in ML yet.
:param covMatType:Constraint on covariance matrices which defines type of matrices. Possible values are:
* **EM::COV_MAT_SPHERICAL** A scaled identity matrix :math:`\mu_k * I`. There is the only parameter :math:`\mu_k` to be estimated for each matrix. The option may be used in special cases, when the constraint is relevant, or as a first step in the optimization (for example in case when the data is preprocessed with PCA). The results of such preliminary estimation may be passed again to the optimization procedure, this time with ``covMatType=EM::COV_MAT_DIAGONAL``.
* **EM::COV_MAT_DIAGONAL** A diagonal matrix with positive diagonal elements. The number of free parameters is ``d`` for each matrix. This is the most commonly used option, yielding good estimation results.
* **EM::COV_MAT_GENERIC** A symmetric positive-definite matrix. The number of free parameters in each matrix is about :math:`d^2/2`. It is not recommended to use this option, unless there is a fairly accurate initial estimation of the parameters and/or a huge number of training samples.
:param termCrit:The termination criteria of the EM algorithm. The EM algorithm can be terminated by the number of iterations ``termCrit.maxCount`` (number of M-steps) or when relative change of likelihood logarithm is less than ``termCrit.epsilon``. Default maximum number of iterations is ``EM::DEFAULT_MAX_ITERS=100``.
EM::train
---------
Estimates the Gaussian mixture parameters from a samples set.
:param samples:Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have ``CV_64F`` type it will be converted to the inner matrix of such type for the further computing.
:param means0:Initial means :math:`a_k` of mixture components. It is a one-channel matrix of :math:`nclusters \times dims` size. If the matrix does not have ``CV_64F`` type it will be converted to the inner matrix of such type for the further computing.
:param covs0:The vector of initial covariance matrices :math:`S_k` of mixture components. Each of covariance matrices is a one-channel matrix of :math:`dims \times dims` size. If the matrices do not have ``CV_64F`` type they will be converted to the inner matrices of such type for the further computing.
:param weights0:Initial weights :math:`\pi_k` of mixture components. It should be a one-channel floating-point matrix with :math:`1 \times nclusters` or :math:`nclusters \times 1` size.
:param probs0:Initial probabilities :math:`p_{i,k}` of sample :math:`i` to belong to mixture component :math:`k`. It is a one-channel floating-point matrix of :math:`nsamples \times nclusters` size.
:param logLikelihoods:The optional output matrix that contains a likelihood logarithm value for each sample. It has :math:`nsamples \times 1` size and ``CV_64FC1`` type.
:param labels:The optional output "class label" for each sample: :math:`\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N` (indices of the most probable mixture component for each sample). It has :math:`nsamples \times 1` size and ``CV_32SC1`` type.
:param probs:The optional output matrix that contains posterior probabilities of each Gaussian mixture component given the each sample. It has :math:`nsamples \times nclusters` size and ``CV_64FC1`` type.
Three versions of training method differ in the initialization of Gaussian mixture model parameters and start step:
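The E/M alternation itself (estimate the posteriors :math:`p_{i,k}`, then re-estimate weights, means, and covariances) can be sketched for a 1-D, two-component mixture (an illustrative sketch with crude initialization, not the ML ``EM`` class):

```python
import numpy as np

def em_1d(samples, n_iter=50):
    """Minimal 1-D, two-component EM. Returns weights, means, variances."""
    x = np.asarray(samples, dtype=np.float64)
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])      # crude initial means
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: posterior probability p_ik of each component per sample
        dens = (w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
                / np.sqrt(2 * np.pi * var))
        p = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = p.sum(axis=0)
        w = nk / len(x)
        mu = (p * x[:, None]).sum(axis=0) / nk
        var = (p * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-5, 1, 300), rng.normal(5, 1, 300)])
w, mu, var = em_1d(x)
print(np.round(np.sort(mu)))  # [-5.  5.]
```

On this well-separated two-cluster sample the means converge to the true centers; the three ``EM`` training variants differ only in how such a loop is initialized.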
EM::predict
-----------
Returns a likelihood logarithm value and an index of the most probable mixture component for the given sample.
:param sample:A sample for classification. It should be a one-channel matrix of :math:`1 \times dims` or :math:`dims \times 1` size.
:param probs:Optional output matrix that contains posterior probabilities of each component given the sample. It has :math:`1 \times nclusters` size and ``CV_64FC1`` type.
Returns ``true`` if the Gaussian mixture model was trained.
..ocv:function:: bool EM::isTrained() const
..ocv:pyfunction:: cv2.EM.isTrained() -> retval
EM::read, EM::write
-------------------
See :ocv:func:`Algorithm::read` and :ocv:func:`Algorithm::write`.
ML implements feed-forward artificial neural networks or, more particularly, multi-layer perceptrons (MLP), the most commonly used type of neural networks.
..image:: pics/mlp.png
All the neurons in MLP are similar. Each of them has several input links (it takes the output values from several neurons in the previous layer as input) and several output links (it passes the response to several neurons in the next layer). The values retrieved from the previous layer are summed up with certain weights, individual for each neuron, plus the bias term. The sum is transformed using the activation function
:math:`f` that may also be different for different neurons.
..image:: pics/neuron_model.png
Different activation functions may be used. ML implements three standard functions.
In ML, all the neurons have the same activation functions, with the same free parameters (
:math:`\alpha, \beta` ) that are specified by the user and are not altered by the training algorithms.
So, the whole trained network works as follows:
#. Take the feature vector as input. The vector size is equal to the size of the input layer.
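The pass described above (weighted sum plus bias, then the activation) can be sketched in NumPy. The symmetric sigmoid formula matches ``CvANN_MLP::SIGMOID_SYM``; the topology and weight values below are made up for illustration:

```python
import numpy as np

def sigmoid_sym(x, alpha=1.0, beta=1.0):
    """The symmetric sigmoid used by CvANN_MLP::SIGMOID_SYM:
    f(x) = beta * (1 - exp(-alpha*x)) / (1 + exp(-alpha*x))."""
    e = np.exp(-alpha * x)
    return beta * (1 - e) / (1 + e)

def mlp_forward(x, layers):
    """Forward pass: every layer computes f(W @ x + b) and feeds the
    result to the next layer. Weights here are arbitrary numbers,
    not a trained network."""
    for W, b in layers:
        x = sigmoid_sym(W @ x + b)
    return x

# A 2-3-1 topology with arbitrary weights and biases
layers = [
    (np.array([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]),
     np.array([0.1, 0.0, -0.1])),
    (np.array([[0.7, -0.5, 0.2]]), np.array([0.05])),
]
out = mlp_forward(np.array([1.0, 2.0]), layers)
print(out.shape)  # (1,)
```

With ``beta=1`` every neuron output stays in the open interval (-1, 1), which is why input/output scaling matters when training such a network.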
The second (default) one is a batch RPROP algorithm.
..[LeCun98] Y. LeCun, L. Bottou, G.B. Orr and K.-R. Muller, *Efficient backprop*, in Neural Networks---Tricks of the Trade, Springer Lecture Notes in Computer Sciences 1524, pp.5-50, 1998.
..[RPROP93] M. Riedmiller and H. Braun, *A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm*, Proc. ICNN, San Francisco (1993).
CvANN_MLP_TrainParams
---------------------
..ocv:struct:: CvANN_MLP_TrainParams
Parameters of the MLP training algorithm. You can initialize the structure with a constructor, or adjust the individual parameters after the structure is created.
CvANN_MLP
---------
..ocv:class:: CvANN_MLP : public CvStatModel
MLP model.
Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method :ocv:func:`CvANN_MLP::create`. All the weights are set to zeros. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once, that is, the weights can be adjusted based on the new training data.
The advanced constructor allows you to create an MLP with the specified topology. See :ocv:func:`CvANN_MLP::create` for details.
:param activateFunc: Parameter specifying the activation function for each neuron: one of ``CvANN_MLP::IDENTITY``, ``CvANN_MLP::SIGMOID_SYM``, and ``CvANN_MLP::GAUSSIAN``.
:param fparam1: Free parameter of the activation function, :math:`\alpha`. See the formulas in the introduction section.
:param fparam2: Free parameter of the activation function, :math:`\beta`. See the formulas in the introduction section.
The method creates an MLP network with the specified topology and assigns the same activation function to all the neurons.
The constructors follow conventions of :ocv:func:`CvStatModel::CvStatModel`. See :ocv:func:`CvStatModel::train` for parameter descriptions.
:param update: Identifies whether the model should be trained from scratch (``update=false``) or should be updated using the new training data (``update=true``).
The method trains the Normal Bayes classifier. It follows the conventions of the generic :ocv:func:`CvStatModel::train` approach with the following limitations:
* Only ``CV_ROW_SAMPLE`` data layout is supported.
CvRTParams
----------
.. ocv:struct:: CvRTParams : public CvDTreeParams
Training parameters of random trees.
CvRTParams::CvRTParams
-----------------------
The constructors.
.. ocv:function:: CvRTParams::CvRTParams()
.. ocv:function:: CvRTParams::CvRTParams( int max_depth, int min_sample_count, float regression_accuracy, bool use_surrogates, int max_categories, const float* priors, bool calc_var_importance, int nactive_vars, int max_num_of_trees_in_the_forest, float forest_accuracy, int termcrit_type )
The method returns the variable importance vector, computed at the training stage when ``CvRTParams::calc_var_importance`` is set to true. If this flag was set to false, the ``NULL`` pointer is returned. This differs from the decision trees where variable importance can be computed anytime after the training.
CvRTrees::calc_error
--------------------
Returns error of the random forest.
.. ocv:function:: float CvRTrees::calc_error( CvMLData* data, int type, std::vector<float>* resp=0 )
The method is identical to :ocv:func:`CvDTree::calc_error` but uses the random forest as predictor.
Most ML classes provide single-step "construct and train" constructors. Such a constructor is equivalent to the default constructor, followed by the :ocv:func:`CvStatModel::train` method with the parameters that are passed to the constructor.
CvParamGrid
-----------
.. ocv:struct:: CvParamGrid
The structure represents the logarithmic grid range of statmodel parameters. It is used for optimizing statmodel accuracy by varying model parameters, the accuracy estimate being computed by cross-validation.
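As an illustration (a hypothetical Python helper, not the C++ API), a logarithmic grid can be thought of as enumerating ``min_val``, ``min_val*step``, ``min_val*step^2``, and so on, while the value stays below ``max_val``:

```python
def log_grid(min_val, max_val, step):
    # Enumerate min_val, min_val*step, min_val*step**2, ... while below max_val.
    # Mirrors the grid validity rule: min_val < max_val and step > 1.
    assert min_val < max_val and step > 1
    values = []
    v = min_val
    while v < max_val:
        values.append(v)
        v *= step
    return values
```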
.. ocv:member:: double CvParamGrid::min_val
CvSVMParams
-----------
.. ocv:struct:: CvSVMParams
SVM training parameters.
* **CvSVM::RBF** Radial basis function (RBF), a good choice in most cases. :math:`K(x_i, x_j) = e^{-\gamma ||x_i - x_j||^2}, \gamma > 0`.
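The RBF kernel value for a pair of feature vectors follows directly from the formula above; a minimal sketch in plain Python (for illustration, not the CvSVM API):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    # K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2), gamma > 0
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)
```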
:param k_fold: Cross-validation parameter. The training set is divided into ``k_fold`` subsets. One subset is used to train the model, the others form the test set. So, the SVM algorithm is executed ``k_fold`` times.
:param \*Grid: Iteration grid for the corresponding SVM parameter.
:param balanced: If ``true`` and the problem is 2-class classification, then the method creates more balanced cross-validation subsets, that is, the proportions between classes in the subsets are close to the proportions in the whole training dataset.
Class for extracting keypoints and computing descriptors using the Scale Invariant Feature Transform (SIFT) algorithm by D. Lowe [Lowe04]_.
The SIFT constructors.
.. ocv:function:: SIFT::SIFT( int nfeatures=0, int nOctaveLayers=3, double contrastThreshold=0.04, double edgeThreshold=10, double sigma=1.6 )
:param nfeatures: The number of best features to retain. The features are ranked by their scores (measured in the SIFT algorithm as the local contrast).
:param nOctaveLayers: The number of layers in each octave. 3 is the value used in D. Lowe's paper. The number of octaves is computed automatically from the image resolution.
:param contrastThreshold: The contrast threshold used to filter out weak features in semi-uniform (low-contrast) regions. The larger the threshold, the fewer features are produced by the detector.
:param edgeThreshold: The threshold used to filter out edge-like features. Note that its meaning is different from ``contrastThreshold``: the larger the ``edgeThreshold``, the fewer features are filtered out (more features are retained).
:param sigma: The sigma of the Gaussian applied to the input image at octave #0. If your image is captured with a weak camera with soft lenses, you might want to reduce the number.
SIFT::operator ()
-----------------
Extracts features and computes their descriptors using the SIFT algorithm
:param useProvidedKeypoints: Boolean flag. If it is true, the keypoint detector is not run. Instead, the provided vector of keypoints is used and the algorithm just computes their descriptors.
SURF
----
.. ocv:class:: SURF : public Feature2D
Class for extracting Speeded Up Robust Features from an image [Bay06]_. The class is derived from ``CvSURFParams`` structure, which specifies the algorithm parameters:
.. ocv:member:: int extended
* 0 means that the basic descriptors (64 elements each) shall be computed
* 1 means that the extended descriptors (128 elements each) shall be computed
.. ocv:member:: int upright
* 0 means that the detector computes the orientation of each feature.
* 1 means that the orientation is not computed (which is much, much faster). For example, if you match images from a stereo pair, or do image stitching, the matched features likely have very similar angles, and you can speed up feature extraction by setting ``upright=1``.
.. ocv:member:: double hessianThreshold
Threshold for the keypoint detector. Only features whose hessian is larger than ``hessianThreshold`` are retained by the detector. Therefore, the larger the value, the fewer keypoints you will get. A good default value could be from 300 to 500, depending on the image contrast.
.. ocv:member:: int nOctaves
The number of Gaussian pyramid octaves that the detector uses. It is set to 4 by default. If you want to get very large features, use a larger value. If you want just small features, decrease it.
.. ocv:member:: int nOctaveLayers
The number of images within each octave of a Gaussian pyramid. It is set to 2 by default.
The SURF extractor constructors.
.. ocv:function:: SURF::SURF()
.. ocv:function:: SURF::SURF( double hessianThreshold, int nOctaves=4, int nOctaveLayers=2, bool extended=true, bool upright=false )
:param mask: Optional input mask that marks the regions where we should detect features.
:param keypoints: The input/output vector of keypoints.
:param descriptors: The output matrix of descriptors. Pass ``cv::noArray()`` if you do not need them.
:param useProvidedKeypoints: Boolean flag. If it is true, the keypoint detector is not run. Instead, the provided vector of keypoints is used and the algorithm just computes their descriptors.
:param storage: Memory storage for the output keypoints and descriptors in OpenCV 1.x API.
:param params: SURF algorithm parameters in OpenCV 1.x API.
The function is parallelized with the TBB library.
:param cascade: Haar classifier cascade (OpenCV 1.x API only). It can be loaded from an XML or YAML file using :ocv:cfunc:`Load`. When the cascade is not needed anymore, release it using ``cvReleaseHaarClassifierCascade(&cascade)``.
:param detector: LatentSVM detector in internal representation.
:param storage: Memory storage to store the resultant sequence of the object candidate rectangles.
:param overlap_threshold: Threshold for the non-maximum suppression algorithm.
:param numThreads: Number of threads used in the parallel version of the algorithm.
.. highlight:: cpp
LatentSvmDetector
-----------------
.. ocv:class:: LatentSvmDetector
This is a C++ wrapping class of Latent SVM. It contains an internal representation of several
trained Latent SVM detectors (models) and a set of methods to load the detectors and detect objects
using them.
LatentSvmDetector::ObjectDetection
----------------------------------
.. ocv:struct:: LatentSvmDetector::ObjectDetection
The structure contains the detection information.
.. ocv:member:: Rect rect
bounding box for a detected object
.. ocv:member:: float score
confidence level
.. ocv:member:: int classID
class (model or detector) ID that detected the object
LatentSvmDetector::LatentSvmDetector
------------------------------------
Two types of constructors.
:param filenames: A set of filenames storing the trained detectors (models). Each file contains one model. See examples of such files in /opencv_extra/testdata/cv/latentsvmdetector/models_VOC2007/.
:param classNames: A set of trained model names. If it is empty, then the name of each model will be constructed from the name of the file containing the model. E.g. the model stored in "/home/user/cat.xml" will get the name "cat".
LatentSvmDetector::~LatentSvmDetector
-------------------------------------
LatentSvmDetector::load
-----------------------
Load the trained models from given ``.xml`` files and return ``true`` if at least one model was loaded.
:param filenames: A set of filenames storing the trained detectors (models). Each file contains one model. See examples of such files in /opencv_extra/testdata/cv/latentsvmdetector/models_VOC2007/.
:param classNames: A set of trained model names. If it is empty, then the name of each model will be constructed from the name of the file containing the model. E.g. the model stored in "/home/user/cat.xml" will get the name "cat".
LatentSvmDetector::detect
-------------------------
Finds rectangular regions in the given image that are likely to contain objects of loaded classes (models).
.. [Felzenszwalb2010] Felzenszwalb, P. F. and Girshick, R. B. and McAllester, D. and Ramanan, D. *Object Detection with Discriminatively Trained Part Based Models*. PAMI, vol. 32, no. 9, pp. 1627-1645, September 2010
.. ocv:class:: detail::BestOf2NearestMatcher : public FeaturesMatcher
Features matcher which finds two best matches for each feature and leaves the best one only if the ratio between descriptor distances is greater than the threshold ``match_conf``. ::
class CV_EXPORTS BestOf2NearestMatcher : public FeaturesMatcher
detail::HomographyBasedEstimator
--------------------------------
.. ocv:class:: detail::HomographyBasedEstimator : public Estimator
Homography based rotation estimator. ::
detail::BundleAdjusterBase
--------------------------
.. ocv:class:: detail::BundleAdjusterBase : public Estimator
Base class for all camera parameters refinement methods. ::
detail::BundleAdjusterReproj
----------------------------
.. ocv:class:: detail::BundleAdjusterReproj : public BundleAdjusterBase
Implementation of the camera parameters refinement algorithm which minimizes sum of the reprojection error squares. ::
detail::BundleAdjusterRay
-------------------------
.. ocv:class:: detail::BundleAdjusterRay : public BundleAdjusterBase
Implementation of the camera parameters refinement algorithm which minimizes sum of the distances between the rays passing through the camera center and a feature. ::
:param prevImg: First 8-bit input image or pyramid constructed by :ocv:func:`buildOpticalFlowPyramid`.
:param maxLevel: 0-based maximal pyramid level number. If set to 0, pyramids are not used (single level). If set to 1, two levels are used, and so on. If pyramids are passed to input, the algorithm will use as many levels as the pyramids have, but no more than ``maxLevel``.
:param criteria: Parameter specifying the termination criteria of the iterative search algorithm (after the specified maximum number of iterations ``criteria.maxCount`` or when the search window moves by less than ``criteria.epsilon``).
:param flags: Operation flags:
* **OPTFLOW_USE_INITIAL_FLOW** Use initial estimations stored in ``nextPts``. If the flag is not set, then ``prevPts`` is copied to ``nextPts`` and is considered as the initial estimate.
* **OPTFLOW_LK_GET_MIN_EIGENVALS** Use minimum eigenvalues as an error measure (see ``minEigThreshold`` description). If the flag is not set, then the L1 distance between patches around the original and a moved point, divided by the number of pixels in a window, is used as an error measure.
:param minEigThreshold: The algorithm computes the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (this matrix is called a spatial gradient matrix in [Bouguet00]_), divided by the number of pixels in a window. If this value is less than ``minEigThreshold``, the corresponding feature is filtered out and its flow is not computed. This allows bad points to be removed early, speeding up the computation.
The function implements a sparse iterative version of the Lucas-Kanade optical flow in pyramids. See [Bouguet00]_. The function is parallelized with the TBB library.
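The ``minEigThreshold`` filtering described above can be sketched as follows (illustrative only; ``g_xx``, ``g_xy``, ``g_yy`` stand for the entries of the 2x2 spatial gradient matrix accumulated over the window, and the helper name is hypothetical):

```python
import math

def keep_feature(g_xx, g_xy, g_yy, win_area, min_eig_threshold=1e-4):
    # Minimum eigenvalue of the symmetric 2x2 matrix [[g_xx, g_xy], [g_xy, g_yy]],
    # normalized by the number of pixels in the window; the feature is kept
    # only if it is not below the threshold.
    min_eig = 0.5 * ((g_xx + g_yy) - math.sqrt((g_xx - g_yy) ** 2 + 4.0 * g_xy ** 2))
    return (min_eig / win_area) >= min_eig_threshold
```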
buildOpticalFlowPyramid
-----------------------
calcOpticalFlowFarneback
----------------------------
Computes a dense optical flow using the Gunnar Farneback's algorithm.
.. ocv:function:: void calcOpticalFlowFarneback( InputArray prev, InputArray next, InputOutputArray flow, double pyr_scale, int levels, int winsize, int iterations, int poly_n, double poly_sigma, int flags )
.. ocv:cfunction:: void cvCalcOpticalFlowFarneback( const CvArr* prev, const CvArr* next, CvArr* flow, double pyr_scale, int levels, int winsize, int iterations, int poly_n, double poly_sigma, int flags )
:param polyN: Size of the pixel neighborhood used to find polynomial expansion in each pixel. Larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, ``polyN`` =5 or 7.
:param polySigma: Standard deviation of the Gaussian that is used to smooth derivatives used as a basis for the polynomial expansion. For ``polyN=5``, you can set ``polySigma=1.1``. For ``polyN=7``, a good value would be ``polySigma=1.5``.
:param flags: Operation flags that can be a combination of the following:
* **OPTFLOW_USE_INITIAL_FLOW** Use the input ``flow`` as an initial flow approximation.
The function finds an optimal affine transform *[A|b]* (a ``2 x 3`` floating-point matrix) that approximates best the affine transformation between:
* Two point sets
* Two raster images. In this case, the function first finds some features in the ``src`` image and finds the corresponding features in the ``dst`` image. After that, the problem is reduced to the first case.
In case of point sets, the problem is formulated as follows: you need to find a 2x2 matrix *A* and 2x1 vector *b* so that:
.. math::
    [A^*|b^*] = \arg \min _{[A|b]} \sum _i \| \texttt{dst}[i] - A { \texttt{src}[i]}^T - b \| ^2
where ``src[i]`` and ``dst[i]`` are the i-th points in ``src`` and ``dst``, respectively.
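For the point-set case, three non-collinear correspondences determine *[A|b]* exactly; the sketch below solves the two resulting 3x3 systems by Cramer's rule (plain Python for illustration only; the real function minimizes the sum above over all points by least squares, and the helper name is hypothetical):

```python
def affine_from_3pts(src, dst):
    # Solve [A|b] that maps three src points onto three dst points exactly.
    # Each output coordinate solves M * [a, b, c]^T = v with M = [[x0,y0,1], ...].
    (x0, y0), (x1, y1), (x2, y2) = src
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)

    def cramer(v0, v1, v2):
        a = (v0 * (y1 - y2) - y0 * (v1 - v2) + (v1 * y2 - v2 * y1)) / det
        b = (x0 * (v1 - v2) - v0 * (x1 - x2) + (x1 * v2 - x2 * v1)) / det
        c = (x0 * (y1 * v2 - y2 * v1) - y0 * (x1 * v2 - x2 * v1)
             + v0 * (x1 * y2 - x2 * y1)) / det
        return a, b, c

    row_x = cramer(dst[0][0], dst[1][0], dst[2][0])  # a11, a12, b1
    row_y = cramer(dst[0][1], dst[1][1], dst[2][1])  # a21, a22, b2
    return row_x, row_y
```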
:math:`[A|b]` can be either arbitrary (when ``fullAffine=true`` ) or have a form of
.. math::
Calculates a gradient orientation of a motion history image.
:param mhi: Motion history single-channel floating-point image.
:param orientation: Output motion gradient orientation image that has the same type and the same size as ``mhi``. Each pixel of the image is a motion orientation, from 0 to 360 degrees.
:param delta1: Minimal (or maximal) allowed difference between ``mhi`` values within a pixel neighborhood.
:param delta2: Maximal (or minimal) allowed difference between ``mhi`` values within a pixel neighborhood. That is, the function finds the minimum ( :math:`m(x,y)` ) and maximum ( :math:`M(x,y)` ) ``mhi`` values over a :math:`3 \times 3` neighborhood of each pixel and marks the motion orientation at :math:`(x, y)` as valid only if
.. math::
Calculates a global motion orientation in a selected region.
:param orientation: Motion gradient orientation image calculated by the function :ocv:func:`calcMotionGradient`.
:param mask: Mask image. It may be a conjunction of a valid gradient mask, also calculated by :ocv:func:`calcMotionGradient`, and the mask of a region whose direction needs to be calculated.
:param mhi: Motion history image calculated by :ocv:func:`updateMotionHistory`.
:param timestamp: Timestamp passed to :ocv:func:`updateMotionHistory`.
:param duration: Maximum duration of a motion track in milliseconds, passed to :ocv:func:`updateMotionHistory`.
The function calculates an average
Splits a motion history image into a few parts corresponding to separate independent motions.
:param timestamp: Current time in milliseconds or other units.
:param segThresh: Segmentation threshold that is recommended to be equal to the interval between motion history "steps" or greater.
The function finds all of the motion segments and marks them in ``segmask`` with individual values (1,2,...). It also computes a vector with ROIs of motion connected components. After that the motion direction for every component can be calculated with :ocv:func:`calcGlobalOrientation` using the extracted mask of the particular component.
Here are important members of the class that control the algorithm, which you can set after constructing the class instance:
.. ocv:member:: int nmixtures
Maximum allowed number of mixture components. Actual number is determined dynamically per pixel.
.. ocv:member:: float backgroundRatio
Threshold defining whether the component is significant enough to be included into the background model (corresponds to ``TB=1-cf`` from the paper). ``cf=0.1 => TB=0.9`` is the default. For ``alpha=0.001``, it means that the mode should exist for approximately 105 frames before it is considered foreground.
.. ocv:member:: float varThresholdGen
Threshold for the squared Mahalanobis distance that helps decide when a sample is close to the existing components (corresponds to ``Tg``). If it is not close to any component, a new component is generated. ``3 sigma => Tg=3*3=9`` is default. A smaller ``Tg`` value generates more components. A higher ``Tg`` value may result in a small number of components but they can grow too large.
.. ocv:member:: float fVarInit
Initial variance for the newly generated components. It affects the speed of adaptation. The parameter value is based on your estimate of the typical standard deviation from the images. OpenCV uses 15 as a reasonable value.
.. ocv:member:: float fVarMin
Parameter used to further control the variance.
.. ocv:member:: float fVarMax
Parameter used to further control the variance.
.. ocv:member:: float fCT
Complexity reduction parameter. This parameter defines the number of samples needed to prove that the component exists. ``CT=0.05`` is a default value for all the samples. By setting ``CT=0`` you get an algorithm very similar to the standard Stauffer&Grimson algorithm.
.. ocv:member:: uchar nShadowDetection
The value for marking shadow pixels in the output foreground mask. Default value is 127.
.. ocv:member:: float fTau
Shadow threshold. A shadow is detected if the pixel is a darker version of the background. ``Tau`` is a threshold defining how much darker the shadow can be. ``Tau=0.5`` means that if a pixel is more than twice darker than the background, then it is not a shadow. See Prati, Mikic, Trivedi, Cucchiara, *Detecting Moving Shadows...*, IEEE PAMI, 2003.
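The ``fTau`` rule can be sketched per pixel as a luminance-ratio test (a hypothetical helper for illustration, not the class interface):

```python
def is_shadow(pixel, background, tau=0.5):
    # Shadow candidate: the pixel is a darker version of the background,
    # i.e. tau <= pixel / background < 1.
    if background <= 0:
        return False
    ratio = pixel / background
    return tau <= ratio < 1.0
```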
.. [Davis97] Davis, J.W. and Bobick, A.F. “The Representation and Recognition of Action Using Temporal Templates”, CVPR97, 1997
.. [Farneback2003] Gunnar Farneback, Two-frame motion estimation based on polynomial expansion, Lecture Notes in Computer Science, 2003, (2749), 363-370.
.. [Horn81] Berthold K.P. Horn and Brian G. Schunck. Determining Optical Flow. Artificial Intelligence, 17, pp. 185-203, 1981.