Merge pull request #18969 from OrestChura:oc/fix_notes_returns

[G-API] Multiple return/note fix

* Fix doxygen:
 - multiple return
 - multiple notes

* Addressing comments
 - divide description of split(merge)3/4
Orest Chura authored 4 years ago, committed by GitHub
parent 619cc01ca1
commit f41327df0c
Files changed:
  1. modules/gapi/include/opencv2/gapi/core.hpp (76 changes)
  2. modules/gapi/include/opencv2/gapi/imgproc.hpp (129 changes)
  3. modules/gapi/include/opencv2/gapi/video.hpp (12 changes)

modules/gapi/include/opencv2/gapi/core.hpp

@@ -1508,41 +1508,77 @@ Output image size will have the size dsize, the depth of output is the same as o
 */
 GAPI_EXPORTS GMatP resizeP(const GMatP& src, const Size& dsize, int interpolation = cv::INTER_LINEAR);
-/** @brief Creates one 3-channel (4-channel) matrix out of 3(4) single-channel ones.
+/** @brief Creates one 4-channel matrix out of 4 single-channel ones.
 The function merges several matrices to make a single multi-channel matrix. That is, each
 element of the output matrix will be a concatenation of the elements of the input matrices, where
 elements of i-th input matrix are treated as mv[i].channels()-element vectors.
-Input matrix must be of @ref CV_8UC3 (@ref CV_8UC4) type.
-The function split3/split4 does the reverse operation.
-@note Function textual ID for merge3 is "org.opencv.core.transform.merge3"
-@note Function textual ID for merge4 is "org.opencv.core.transform.merge4"
-@param src1 first input matrix to be merged
-@param src2 second input matrix to be merged
-@param src3 third input matrix to be merged
-@param src4 fourth input matrix to be merged
-@sa split4, split3
+Output matrix must be of @ref CV_8UC4 type.
+The function split4 does the reverse operation.
+@note
+ - Function textual ID is "org.opencv.core.transform.merge4"
+@param src1 first input @ref CV_8UC1 matrix to be merged.
+@param src2 second input @ref CV_8UC1 matrix to be merged.
+@param src3 third input @ref CV_8UC1 matrix to be merged.
+@param src4 fourth input @ref CV_8UC1 matrix to be merged.
+@sa merge3, split4, split3
 */
 GAPI_EXPORTS GMat merge4(const GMat& src1, const GMat& src2, const GMat& src3, const GMat& src4);
+/** @brief Creates one 3-channel matrix out of 3 single-channel ones.
+The function merges several matrices to make a single multi-channel matrix. That is, each
+element of the output matrix will be a concatenation of the elements of the input matrices, where
+elements of i-th input matrix are treated as mv[i].channels()-element vectors.
+Output matrix must be of @ref CV_8UC3 type.
+The function split3 does the reverse operation.
+@note
+ - Function textual ID is "org.opencv.core.transform.merge3"
+@param src1 first input @ref CV_8UC1 matrix to be merged.
+@param src2 second input @ref CV_8UC1 matrix to be merged.
+@param src3 third input @ref CV_8UC1 matrix to be merged.
+@sa merge4, split4, split3
+*/
 GAPI_EXPORTS GMat merge3(const GMat& src1, const GMat& src2, const GMat& src3);
-/** @brief Divides a 3-channel (4-channel) matrix into 3(4) single-channel matrices.
-The function splits a 3-channel (4-channel) matrix into 3(4) single-channel matrices:
+/** @brief Divides a 4-channel matrix into 4 single-channel matrices.
+The function splits a 4-channel matrix into 4 single-channel matrices:
 \f[\texttt{mv} [c](I) = \texttt{src} (I)_c\f]
-All output matrices must be in @ref CV_8UC1.
-@note Function textual for split3 ID is "org.opencv.core.transform.split3"
-@note Function textual for split4 ID is "org.opencv.core.transform.split4"
-@param src input @ref CV_8UC4 (@ref CV_8UC3) matrix.
-@sa merge3, merge4
+All output matrices must be of @ref CV_8UC1 type.
+The function merge4 does the reverse operation.
+@note
+ - Function textual ID is "org.opencv.core.transform.split4"
+@param src input @ref CV_8UC4 matrix.
+@sa split3, merge3, merge4
 */
 GAPI_EXPORTS std::tuple<GMat, GMat, GMat,GMat> split4(const GMat& src);
+/** @brief Divides a 3-channel matrix into 3 single-channel matrices.
+The function splits a 3-channel matrix into 3 single-channel matrices:
+\f[\texttt{mv} [c](I) = \texttt{src} (I)_c\f]
+All output matrices must be of @ref CV_8UC1 type.
+The function merge3 does the reverse operation.
+@note
+ - Function textual ID is "org.opencv.core.transform.split3"
+@param src input @ref CV_8UC3 matrix.
+@sa split4, merge3, merge4
+*/
 GAPI_EXPORTS_W std::tuple<GMat, GMat, GMat> split3(const GMat& src);
 /** @brief Applies a generic geometrical transformation to an image.
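Illustrative usage (editor's sketch, not part of the commit): a minimal G-API pipeline built from the split3/merge3 kernels documented in the hunk above. The GComputation/apply pattern is standard G-API; the channel-swap graph, the 640x480 size, and names such as swapChannels are arbitrary choices for the sketch.

```cpp
#include <tuple>
#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>   // declares cv::gapi::split3 / split4 / merge3 / merge4

int main() {
    cv::GMat in;                                      // expects a CV_8UC3 frame
    cv::GMat b, g, r;
    std::tie(b, g, r) = cv::gapi::split3(in);         // three CV_8UC1 planes
    cv::GMat out = cv::gapi::merge3(r, g, b);         // re-assemble with channels swapped
    cv::GComputation swapChannels(cv::GIn(in), cv::GOut(out));

    cv::Mat src(480, 640, CV_8UC3, cv::Scalar(10, 20, 30)), dst;
    swapChannels.apply(cv::gin(src), cv::gout(dst));  // dst is CV_8UC3 with reordered channels
    return 0;
}
```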
@@ -1560,7 +1596,9 @@ convert from floating to fixed-point representations of a map is that they can y
 cvFloor(y)) and \f$map_2\f$ contains indices in a table of interpolation coefficients.
 Output image must be of the same size and depth as input one.
-@note Function textual ID is "org.opencv.core.transform.remap"
+@note
+ - Function textual ID is "org.opencv.core.transform.remap"
+ - Due to current implementation limitations the size of an input and output images should be less than 32767x32767.
 @param src Source image.
 @param map1 The first map of either (x,y) points or just x values having the type CV_16SC2,
@@ -1573,8 +1611,6 @@ and #INTER_LINEAR_EXACT are not supported by this function.
 borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image that
 corresponds to the "outliers" in the source image are not modified by the function.
 @param borderValue Value used in case of a constant border. By default, it is 0.
-@note
-Due to current implementation limitations the size of an input and output images should be less than 32767x32767.
 */
 GAPI_EXPORTS GMat remap(const GMat& src, const Mat& map1, const Mat& map2,
                         int interpolation, int borderMode = BORDER_CONSTANT,
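For context, a construction-only sketch of how the remap kernel above is typically wired into a graph (editor's illustration, not from the commit): the maps are plain host cv::Mat values passed at graph-construction time, matching the signature shown; the horizontal-flip maps and the 640x480 size are arbitrary and stay well under the 32767x32767 limit from the relocated note.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>

int main() {
    const cv::Size sz(640, 480);
    cv::Mat map_x(sz, CV_32FC1), map_y(sz, CV_32FC1);
    for (int y = 0; y < sz.height; ++y)
        for (int x = 0; x < sz.width; ++x) {
            map_x.at<float>(y, x) = static_cast<float>(sz.width - 1 - x);  // horizontal flip
            map_y.at<float>(y, x) = static_cast<float>(y);
        }

    cv::GMat in;
    cv::GMat out = cv::gapi::remap(in, map_x, map_y, cv::INTER_LINEAR, cv::BORDER_CONSTANT);
    cv::GComputation flip(cv::GIn(in), cv::GOut(out));

    cv::Mat src(sz, CV_8UC1, cv::Scalar(0)), dst;
    flip.apply(cv::gin(src), cv::gout(dst));  // dst: same size and depth as src
    return 0;
}
```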

modules/gapi/include/opencv2/gapi/imgproc.hpp

@@ -503,10 +503,10 @@ kernel kernelY. The final result is returned.
 Supported matrix data types are @ref CV_8UC1, @ref CV_8UC3, @ref CV_16UC1, @ref CV_16SC1, @ref CV_32FC1.
 Output image must have the same type, size, and number of channels as the input image.
-@note In case of floating-point computation, rounding to nearest even is procedeed
+@note
+ - In case of floating-point computation, rounding to nearest even is procedeed
 if hardware supports it (if not - to nearest value).
+ - Function textual ID is "org.opencv.imgproc.filters.sepfilter"
-@note Function textual ID is "org.opencv.imgproc.filters.sepfilter"
 @param src Source image.
 @param ddepth desired depth of the destination image (the following combinations of src.depth() and ddepth are supported:
@@ -545,9 +545,9 @@ anchor.y - 1)`.
 Supported matrix data types are @ref CV_8UC1, @ref CV_8UC3, @ref CV_16UC1, @ref CV_16SC1, @ref CV_32FC1.
 Output image must have the same size and number of channels an input image.
-@note Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
-@note Function textual ID is "org.opencv.imgproc.filters.filter2D"
+@note
+ - Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+ - Function textual ID is "org.opencv.imgproc.filters.filter2D"
 @param src input image.
 @param ddepth desired depth of the destination image
@@ -582,9 +582,9 @@ algorithms, and so on). If you need to compute pixel sums over variable-size win
 Supported input matrix data types are @ref CV_8UC1, @ref CV_8UC3, @ref CV_16UC1, @ref CV_16SC1, @ref CV_32FC1.
 Output image must have the same type, size, and number of channels as the input image.
-@note Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
-@note Function textual ID is "org.opencv.imgproc.filters.boxfilter"
+@note
+ - Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+ - Function textual ID is "org.opencv.imgproc.filters.boxfilter"
 @param src Source image.
 @param dtype the output image depth (-1 to set the input image data type).
@@ -611,9 +611,9 @@ true, borderType)`.
 Supported input matrix data types are @ref CV_8UC1, @ref CV_8UC3, @ref CV_16UC1, @ref CV_16SC1, @ref CV_32FC1.
 Output image must have the same type, size, and number of channels as the input image.
-@note Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
-@note Function textual ID is "org.opencv.imgproc.filters.blur"
+@note
+ - Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+ - Function textual ID is "org.opencv.imgproc.filters.blur"
 @param src Source image.
 @param ksize blurring kernel size.
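Editor's sketch (not part of the commit) for the smoothing filters touched in the hunks above: a one-kernel graph around cv::gapi::blur. Only the required arguments are passed; the defaulted anchor/border parameters are left at the values documented in imgproc.hpp, and the image size is arbitrary.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/imgproc.hpp>   // cv::gapi::blur, boxFilter, filter2D, ...

int main() {
    cv::GMat in;
    cv::GMat out = cv::gapi::blur(in, cv::Size(3, 3));   // normalized box filter, 3x3 window
    cv::GComputation smooth(cv::GIn(in), cv::GOut(out));

    cv::Mat src(480, 640, CV_8UC1, cv::Scalar(128)), dst;
    smooth.apply(cv::gin(src), cv::gout(dst));           // dst keeps the type and size of src
    return 0;
}
```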
@@ -639,9 +639,9 @@ Output image must have the same type and number of channels an input image.
 Supported input matrix data types are @ref CV_8UC1, @ref CV_8UC3, @ref CV_16UC1, @ref CV_16SC1, @ref CV_32FC1.
 Output image must have the same type, size, and number of channels as the input image.
-@note Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
-@note Function textual ID is "org.opencv.imgproc.filters.gaussianBlur"
+@note
+ - Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+ - Function textual ID is "org.opencv.imgproc.filters.gaussianBlur"
 @param src input image;
 @param ksize Gaussian kernel size. ksize.width and ksize.height can differ but they both must be
@@ -664,10 +664,10 @@ GAPI_EXPORTS GMat gaussianBlur(const GMat& src, const Size& ksize, double sigmaX
 The function smoothes an image using the median filter with the \f$\texttt{ksize} \times
 \texttt{ksize}\f$ aperture. Each channel of a multi-channel image is processed independently.
 Output image must have the same type, size, and number of channels as the input image.
-@note Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+@note
+ - Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
 The median filter uses cv::BORDER_REPLICATE internally to cope with border pixels, see cv::BorderTypes
+ - Function textual ID is "org.opencv.imgproc.filters.medianBlur"
-@note Function textual ID is "org.opencv.imgproc.filters.medianBlur"
 @param src input matrix (image)
 @param ksize aperture linear size; it must be odd and greater than 1, for example: 3, 5, 7 ...
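Similarly, a hedged sketch for the Gaussian and median filters documented above (kernel sizes and sigma are arbitrary; only required arguments are passed):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/imgproc.hpp>

int main() {
    cv::GMat in;
    cv::GMat g = cv::gapi::gaussianBlur(in, cv::Size(5, 5), 1.5);  // sigmaY defaults to 0 (derived from sigmaX)
    cv::GMat m = cv::gapi::medianBlur(g, 3);                       // ksize must be odd and greater than 1
    cv::GComputation denoise(cv::GIn(in), cv::GOut(m));

    cv::Mat src(480, 640, CV_8UC1, cv::Scalar(64)), dst;
    denoise.apply(cv::gin(src), cv::gout(dst));
    return 0;
}
```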
@@ -685,9 +685,9 @@ shape of a pixel neighborhood over which the minimum is taken:
 Erosion can be applied several (iterations) times. In case of multi-channel images, each channel is processed independently.
 Supported input matrix data types are @ref CV_8UC1, @ref CV_8UC3, @ref CV_16UC1, @ref CV_16SC1, and @ref CV_32FC1.
 Output image must have the same type, size, and number of channels as the input image.
-@note Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
-@note Function textual ID is "org.opencv.imgproc.filters.erode"
+@note
+ - Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+ - Function textual ID is "org.opencv.imgproc.filters.erode"
 @param src input image
 @param kernel structuring element used for erosion; if `element=Mat()`, a `3 x 3` rectangular
@@ -709,7 +709,9 @@ The function erodes the source image using the rectangular structuring element w
 Erosion can be applied several (iterations) times. In case of multi-channel images, each channel is processed independently.
 Supported input matrix data types are @ref CV_8UC1, @ref CV_8UC3, @ref CV_16UC1, @ref CV_16SC1, and @ref CV_32FC1.
 Output image must have the same type, size, and number of channels as the input image.
-@note Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+@note
+ - Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+ - Function textual ID is "org.opencv.imgproc.filters.erode"
 @param src input image
 @param iterations number of times erosion is applied.
@@ -730,9 +732,9 @@ shape of a pixel neighborhood over which the maximum is taken:
 Dilation can be applied several (iterations) times. In case of multi-channel images, each channel is processed independently.
 Supported input matrix data types are @ref CV_8UC1, @ref CV_8UC3, @ref CV_16UC1, @ref CV_16SC1, and @ref CV_32FC1.
 Output image must have the same type, size, and number of channels as the input image.
-@note Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
-@note Function textual ID is "org.opencv.imgproc.filters.dilate"
+@note
+ - Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+ - Function textual ID is "org.opencv.imgproc.filters.dilate"
 @param src input image.
 @param kernel structuring element used for dilation; if elemenat=Mat(), a 3 x 3 rectangular
@@ -757,9 +759,9 @@ shape of a pixel neighborhood over which the maximum is taken:
 Dilation can be applied several (iterations) times. In case of multi-channel images, each channel is processed independently.
 Supported input matrix data types are @ref CV_8UC1, @ref CV_8UC3, @ref CV_16UC1, @ref CV_16SC1, and @ref CV_32FC1.
 Output image must have the same type, size, and number of channels as the input image.
-@note Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
-@note Function textual ID is "org.opencv.imgproc.filters.dilate"
+@note
+ - Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+ - Function textual ID is "org.opencv.imgproc.filters.dilate"
 @param src input image.
 @param iterations number of times dilation is applied.
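Editor's sketch for the erode/dilate family documented above (kernel shape, sizes and iteration count are arbitrary): erode takes an explicit structuring element, while the 3x3 variants take only an iteration count.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>        // cv::getStructuringElement
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/imgproc.hpp>

int main() {
    const cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));

    cv::GMat in;
    cv::GMat eroded  = cv::gapi::erode(in, kernel);     // custom structuring element, 1 iteration
    cv::GMat dilated = cv::gapi::dilate3x3(eroded, 2);  // 3x3 rectangular element, 2 iterations
    cv::GComputation morph(cv::GIn(in), cv::GOut(dilated));

    cv::Mat src(480, 640, CV_8UC1, cv::Scalar(0)), dst;
    morph.apply(cv::gin(src), cv::gout(dst));
    return 0;
}
```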
@@ -780,7 +782,12 @@ basic operations.
 Any of the operations can be done in-place. In case of multi-channel images, each channel is
 processed independently.
-@note Function textual ID is "org.opencv.imgproc.filters.morphologyEx"
+@note
+ - Function textual ID is "org.opencv.imgproc.filters.morphologyEx"
+ - The number of iterations is the number of times erosion or dilatation operation will be
+applied. For instance, an opening operation (#MORPH_OPEN) with two iterations is equivalent to
+apply successively: erode -> erode -> dilate -> dilate
+(and not erode -> dilate -> erode -> dilate).
 @param src Input image.
 @param op Type of a morphological operation, see #MorphTypes
@@ -792,10 +799,6 @@ the kernel center.
 @param borderValue Border value in case of a constant border. The default value has a special
 meaning.
 @sa dilate, erode, getStructuringElement
-@note The number of iterations is the number of times erosion or dilatation operation will be
-applied. For instance, an opening operation (#MORPH_OPEN) with two iterations is equivalent to
-apply successively: erode -> erode -> dilate -> dilate
-(and not erode -> dilate -> erode -> dilate).
 */
 GAPI_EXPORTS GMat morphologyEx(const GMat &src, const MorphTypes op, const Mat &kernel,
                                const Point &anchor = Point(-1,-1),
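A hedged sketch for the morphologyEx entry above (element shape and size are arbitrary). Note the relocated iterations remark: with iterations = 2, MORPH_OPEN expands to erode -> erode -> dilate -> dilate, not to two interleaved open passes.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/imgproc.hpp>

int main() {
    const cv::Mat k = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));

    cv::GMat in;
    // Default single iteration: one erode followed by one dilate.
    cv::GMat opened = cv::gapi::morphologyEx(in, cv::MORPH_OPEN, k);
    cv::GComputation opening(cv::GIn(in), cv::GOut(opened));

    cv::Mat src(480, 640, CV_8UC1, cv::Scalar(0)), dst;
    opening.apply(cv::gin(src), cv::gout(dst));
    return 0;
}
```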
@@ -832,9 +835,9 @@ The second case corresponds to a kernel of:
 \f[\vecthreethree{-1}{-2}{-1}{0}{0}{0}{1}{2}{1}\f]
-@note Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
-@note Function textual ID is "org.opencv.imgproc.filters.sobel"
+@note
+ - Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+ - Function textual ID is "org.opencv.imgproc.filters.sobel"
 @param src input image.
 @param ddepth output image depth, see @ref filter_depths "combinations"; in the case of
@@ -883,11 +886,10 @@ The second case corresponds to a kernel of:
 \f[\vecthreethree{-1}{-2}{-1}{0}{0}{0}{1}{2}{1}\f]
-@note First returned matrix correspons to dx derivative while the second one to dy.
-@note Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
-@note Function textual ID is "org.opencv.imgproc.filters.sobelxy"
+@note
+ - First returned matrix correspons to dx derivative while the second one to dy.
+ - Rounding to nearest even is procedeed if hardware supports it, if not - to nearest.
+ - Function textual ID is "org.opencv.imgproc.filters.sobelxy"
 @param src input image.
 @param ddepth output image depth, see @ref filter_depths "combinations"; in the case of
@@ -1010,11 +1012,11 @@ described in @cite Shi94
 The function can be used to initialize a point-based tracker of an object.
-@note If the function is called with different values A and B of the parameter qualityLevel , and
+@note
+ - If the function is called with different values A and B of the parameter qualityLevel , and
 A \> B, the vector of returned corners with qualityLevel=A will be the prefix of the output vector
 with qualityLevel=B .
+ - Function textual ID is "org.opencv.imgproc.feature.goodFeaturesToTrack"
-@note Function textual ID is "org.opencv.imgproc.feature.goodFeaturesToTrack"
 @param image Input 8-bit or floating-point 32-bit, single-channel image.
 @param maxCorners Maximum number of corners to return. If there are more corners than are found,
@@ -1059,9 +1061,9 @@ The function equalizes the histogram of the input image using the following algo
 - Transform the image using \f$H'\f$ as a look-up table: \f$\texttt{dst}(x,y) = H'(\texttt{src}(x,y))\f$
 The algorithm normalizes the brightness and increases the contrast of the image.
-@note The returned image is of the same size and type as input.
-@note Function textual ID is "org.opencv.imgproc.equalizeHist"
+@note
+ - The returned image is of the same size and type as input.
+ - Function textual ID is "org.opencv.imgproc.equalizeHist"
 @param src Source 8-bit single channel image.
 */
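A minimal sketch for the equalizeHist entry above (the 8-bit single-channel requirement comes from the doc comment; the image contents and size are arbitrary):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/imgproc.hpp>

int main() {
    cv::GMat in;
    cv::GMat out = cv::gapi::equalizeHist(in);   // input must be 8-bit, single channel
    cv::GComputation eq(cv::GIn(in), cv::GOut(out));

    cv::Mat src(480, 640, CV_8UC1, cv::Scalar(100)), dst;
    eq.apply(cv::gin(src), cv::gout(dst));       // dst: same size and type as src (per the note)
    return 0;
}
```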
@@ -1121,8 +1123,9 @@ image of labels ( @ref CV_32SC1 ). If #RETR_FLOODFILL -- @ref CV_32SC1 supports
 contours are extracted from the image ROI and then they should be analyzed in the whole image
 context.
-@return GArray of detected contours. Each contour is stored as a GArray of points.
-@return Optional output GArray of cv::Vec4i, containing information about the image topology.
+@return
+ - GArray of detected contours. Each contour is stored as a GArray of points.
+ - Optional output GArray of cv::Vec4i, containing information about the image topology.
 It has as many elements as the number of contours. For each i-th contour contours[i], the elements
 hierarchy[i][0] , hierarchy[i][1] , hierarchy[i][2] , and hierarchy[i][3] are set to 0-based
 indices in contours of the next and previous contours at the same hierarchical level, the first
@@ -1146,14 +1149,14 @@ of gray-scale image.
 The function calculates and returns the minimal up-right bounding rectangle for the specified
 point set or non-zero pixels of gray-scale image.
-@note Function textual ID is "org.opencv.imgproc.shape.boundingRectMat"
+@note
+ - Function textual ID is "org.opencv.imgproc.shape.boundingRectMat"
+ - In case of a 2D points' set given, Mat should be 2-dimensional, have a single row or column
+if there are 2 channels, or have 2 columns if there is a single channel. Mat should have either
+@ref CV_32S or @ref CV_32F depth
 @param src Input gray-scale image @ref CV_8UC1; or input set of @ref CV_32S or @ref CV_32F
 2D points stored in Mat.
-@note In case of a 2D points' set given, Mat should be 2-dimensional, have a single row or column
-if there are 2 channels, or have 2 columns if there is a single channel. Mat should have either
-@ref CV_32S or @ref CV_32F depth
 */
 GAPI_EXPORTS GOpaque<Rect> boundingRect(const GMat& src);
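Editor's sketch for the boundingRect overload above, showing how a GOpaque<Rect> result is read back on the host (the synthetic mask and its rectangle are arbitrary test data):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>        // cv::rectangle, for the synthetic mask
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/imgproc.hpp>

int main() {
    cv::GMat in;
    cv::GOpaque<cv::Rect> rc = cv::gapi::boundingRect(in);   // box around non-zero pixels
    cv::GComputation bbox(cv::GIn(in), cv::GOut(rc));

    cv::Mat mask = cv::Mat::zeros(480, 640, CV_8UC1);
    cv::rectangle(mask, cv::Rect(100, 50, 64, 32), cv::Scalar(255), cv::FILLED);

    cv::Rect out;
    bbox.apply(cv::gin(mask), cv::gout(out));                // expected: roughly Rect(100, 50, 64, 32)
    return 0;
}
```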
@@ -1199,14 +1202,13 @@ The algorithm is based on the M-estimator ( <http://en.wikipedia.org/wiki/M-esti
 that iteratively fits the line using the weighted least-squares algorithm. After each iteration the
 weights \f$w_i\f$ are adjusted to be inversely proportional to \f$\rho(r_i)\f$ .
-@note Function textual ID is "org.opencv.imgproc.shape.fitLine2DMat"
+@note
+ - Function textual ID is "org.opencv.imgproc.shape.fitLine2DMat"
+ - In case of an N-dimentional points' set given, Mat should be 2-dimensional, have a single row
+or column if there are N channels, or have N columns if there is a single channel.
 @param src Input set of 2D points stored in one of possible containers: Mat,
 std::vector<cv::Point2i>, std::vector<cv::Point2f>, std::vector<cv::Point2d>.
-@note In case of an N-dimentional points' set given, Mat should be 2-dimensional, have a single row
-or column if there are N channels, or have N columns if there is a single channel.
 @param distType Distance used by the M-estimator, see #DistanceTypes. @ref DIST_USER
 and @ref DIST_C are not suppored.
 @param param Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value
@@ -1272,14 +1274,13 @@ The algorithm is based on the M-estimator ( <http://en.wikipedia.org/wiki/M-esti
 that iteratively fits the line using the weighted least-squares algorithm. After each iteration the
 weights \f$w_i\f$ are adjusted to be inversely proportional to \f$\rho(r_i)\f$ .
-@note Function textual ID is "org.opencv.imgproc.shape.fitLine3DMat"
+@note
+ - Function textual ID is "org.opencv.imgproc.shape.fitLine3DMat"
+ - In case of an N-dimentional points' set given, Mat should be 2-dimensional, have a single row
+or column if there are N channels, or have N columns if there is a single channel.
 @param src Input set of 3D points stored in one of possible containers: Mat,
 std::vector<cv::Point3i>, std::vector<cv::Point3f>, std::vector<cv::Point3d>.
-@note In case of an N-dimentional points' set given, Mat should be 2-dimensional, have a single row
-or column if there are N channels, or have N columns if there is a single channel.
 @param distType Distance used by the M-estimator, see #DistanceTypes. @ref DIST_USER
 and @ref DIST_C are not suppored.
 @param param Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value

modules/gapi/include/opencv2/gapi/video.hpp

@@ -150,8 +150,9 @@ G_TYPED_KERNEL(GBackgroundSubtractor, <GMat(GMat, BackgroundSubtractorParams)>,
 @param tryReuseInputImage put ROI of input image into the pyramid if possible. You can pass false
 to force data copying.
-@return output pyramid.
-@return number of levels in constructed pyramid. Can be less than maxLevel.
+@return
+ - output pyramid.
+ - number of levels in constructed pyramid. Can be less than maxLevel.
 */
 GAPI_EXPORTS std::tuple<GArray<GMat>, GScalar>
 buildOpticalFlowPyramid(const GMat &img,
@@ -198,11 +199,12 @@ by number of pixels in a window; if this value is less than minEigThreshold, the
 feature is filtered out and its flow is not processed, so it allows to remove bad points and get a
 performance boost.
-@return GArray of 2D points (with single-precision floating-point coordinates)
+@return
+ - GArray of 2D points (with single-precision floating-point coordinates)
 containing the calculated new positions of input features in the second image.
-@return status GArray (of unsigned chars); each element of the vector is set to 1 if
+ - status GArray (of unsigned chars); each element of the vector is set to 1 if
 the flow for the corresponding features has been found, otherwise, it is set to 0.
-@return GArray of errors (doubles); each element of the vector is set to an error for the
+ - GArray of errors (doubles); each element of the vector is set to an error for the
 corresponding feature, type of the error measure can be set in flags parameter; if the flow wasn't
 found then the error is not defined (use the status parameter to find such cases).
 */
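To close, a construction-only sketch of consuming the three returns listed above from cv::gapi::calcOpticalFlowPyrLK (editor's illustration). The exact parameter list (predicted points, window size, defaults) and the element type of the error array should be checked against video.hpp; here prevPts is reused as the prediction, which is the usual choice when OPTFLOW_USE_INITIAL_FLOW is not set, and the graph is only built, not executed.

```cpp
#include <tuple>
#include <opencv2/core.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/video.hpp>

int main() {
    cv::GMat prevImg, nextImg;
    cv::GArray<cv::Point2f> prevPts;

    // One call, three documented outputs: new positions, per-point status, per-point error.
    auto results = cv::gapi::calcOpticalFlowPyrLK(prevImg, nextImg, prevPts,
                                                  prevPts /* predicted points */);

    cv::GComputation lk(cv::GIn(prevImg, nextImg, prevPts),
                        cv::GOut(std::get<0>(results),   // GArray of 2D points
                                 std::get<1>(results),   // status GArray
                                 std::get<2>(results))); // error GArray
    return 0;
}
```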
