The class computes the disparity map using the block matching algorithm. The class also performs pre- and post-filtering steps: Sobel prefiltering (if the \texttt{PREFILTER\_XSOBEL} preset is used) and low-texture filtering (if \texttt{avergeTexThreshold} $>$ 0). If \texttt{avergeTexThreshold = 0}, low-texture filtering is disabled; otherwise the disparity is set to 0 in each point \texttt{(x, y)} where for the left image $\sum HorizontalGradientsInWindow(x, y, winSize) < (winSize \cdot winSize) \cdot avergeTexThreshold$, i.e. the input left image is low-textured.
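For illustration, here is a minimal usage sketch, assuming the OpenCV 2.x \texttt{gpu} module API (the input file names are placeholders):
\begin{lstlisting}
// Rectified stereo pair, 8-bit grayscale.
cv::Mat left  = cv::imread("left.png",  CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat right = cv::imread("right.png", CV_LOAD_IMAGE_GRAYSCALE);

// Upload to the GPU.
cv::gpu::GpuMat d_left(left), d_right(right), d_disp;

// Block matcher with Sobel prefiltering, 64 disparities, 19x19 window.
cv::gpu::StereoBM_GPU bm(cv::gpu::StereoBM_GPU::PREFILTER_XSOBEL, 64, 19);
bm.avergeTexThreshold = 4; // enable low-texture filtering

bm(d_left, d_right, d_disp);   // compute the disparity map

cv::Mat disp;
d_disp.download(disp);         // download the result
\end{lstlisting}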
Some heuristics that tries to estimate whether the current GPU will be faster than the CPU in this algorithm. It queries the currently active device.
\cvdefCpp{
bool StereoBM\_GPU::checkIfGpuCallReasonable();
}
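A hedged usage sketch follows; the dispatch between the GPU matcher and a CPU fallback (\texttt{cv::StereoBM}) is an illustrative assumption, and \texttt{left} and \texttt{right} are assumed to be already loaded \texttt{cv::Mat} images:
\begin{lstlisting}
cv::Mat disp;
if (cv::gpu::StereoBM_GPU::checkIfGpuCallReasonable())
{
    // GPU path: upload, match, download.
    cv::gpu::GpuMat d_left(left), d_right(right), d_disp;
    cv::gpu::StereoBM_GPU bm;
    bm(d_left, d_right, d_disp);
    d_disp.download(disp);
}
else
{
    // CPU fallback.
    cv::StereoBM bm;
    bm(left, right, disp);
}
\end{lstlisting}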
The class implements the Pedro F. Felzenszwalb algorithm \cite{felzenszwalb_bp}. It can compute its own data cost (using a truncated linear model) or use a user-provided data cost.
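As a sketch of both modes (assuming the OpenCV 2.x \texttt{gpu} API, with \texttt{d\_left} and \texttt{d\_right} already uploaded; the layout of a user-provided data cost is described in the full documentation):
\begin{lstlisting}
cv::gpu::GpuMat d_disp;
cv::gpu::StereoBeliefPropagation bp(64 /*ndisp*/, 5 /*iters*/, 5 /*levels*/);

// 1) Let the algorithm compute its own data cost (truncated linear model).
bp(d_left, d_right, d_disp);

// 2) Or pass a user-provided data cost instead of the image pair:
// cv::gpu::GpuMat d_cost = ...;  // user-specified data cost
// bp(d_cost, d_disp);
\end{lstlisting}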
\textbf{Please note:} \texttt{StereoBeliefPropagation} requires a lot of memory:
For more details please see \cite{felzenszwalb_bp}.
By default, \texttt{StereoBeliefPropagation} uses floating-point arithmetic and the \texttt{CV\_32FC1} type for messages. But it can also use fixed-point arithmetic and the \texttt{CV\_16SC1} type for messages for better performance. To avoid overflow in this case, the parameters must satisfy
Some heuristics that tries to compute recommended parameters (\texttt{ndisp}, \texttt{iters} and \texttt{levels}) for the specified image size (\texttt{width} and \texttt{height}).
\cvdefCpp{
void StereoBeliefPropagation::estimateRecommendedParams(\par int width, int height, int\& ndisp, int\& iters, int\& levels);
}
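A short usage sketch (assuming \texttt{width} and \texttt{height} hold the input image size and \texttt{d\_left}, \texttt{d\_right} are already uploaded images):
\begin{lstlisting}
int ndisp, iters, levels;
cv::gpu::StereoBeliefPropagation::estimateRecommendedParams(
    width, height, ndisp, iters, levels);

cv::gpu::StereoBeliefPropagation bp(ndisp, iters, levels);
cv::gpu::GpuMat d_disp;
bp(d_left, d_right, d_disp);
\end{lstlisting}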
};
\end{lstlisting}
The class implements the Q. Yang algorithm \cite{qx_csbp}. \texttt{StereoConstantSpaceBP} supports both local minimum and global minimum data cost initialization algorithms. For more details please see the paper. By default the local algorithm is used; to enable the global algorithm, set \texttt{use\_local\_init\_data\_cost} to \texttt{false}.
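A minimal sketch of enabling the global initialization algorithm (assuming the OpenCV 2.x \texttt{gpu} API and already uploaded \texttt{d\_left}, \texttt{d\_right}):
\begin{lstlisting}
cv::gpu::GpuMat d_disp;
cv::gpu::StereoConstantSpaceBP csbp(128 /*ndisp*/, 8 /*iters*/,
                                    4 /*levels*/, 4 /*nr_plane*/);

// Switch from the default local minimum to the global minimum
// data cost initialization algorithm.
csbp.use_local_init_data_cost = false;

csbp(d_left, d_right, d_disp);
\end{lstlisting}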
By default, \texttt{StereoConstantSpaceBP} uses floating-point arithmetic and the \texttt{CV\_32FC1} type for messages. But it can also use fixed-point arithmetic and the \texttt{CV\_16SC1} type for messages for better performance. To avoid overflow in this case, the parameters must satisfy
DisparityBilateralFilter::DisparityBilateralFilter(\par int ndisp = DEFAULT\_NDISP, int radius = DEFAULT\_RADIUS, \par int iters = DEFAULT\_ITERS);\newline
\cvarg{ndisp}{Number of disparities.}
\cvarg{radius}{Filter radius.}
\cvarg{iters}{Number of iterations.}
\cvarg{edge\_threshold}{Threshold for edges.}
\cvarg{max\_disc\_threshold}{Constant to reject outliers.}
\cvarg{trainDescs}{Train set of descriptors. It will not be added to the train descriptors collection stored in the class object.}
\cvarg{trainIdx}{One-row \texttt{CV\_32SC1} matrix. It will contain the best train index for each query. If some query descriptors are masked out in \texttt{mask}, it will contain -1.}
\cvarg{distance}{One-row \texttt{CV\_32FC1} matrix. It will contain the best distance for each query. If some query descriptors are masked out in \texttt{mask}, it will contain \texttt{FLT\_MAX}.}
\cvarg{mask}{Mask specifying permissible matches between input query and train matrices of descriptors.}
\cvarg{trainCollection}{\texttt{GpuMat} containing the train collection. It can be obtained from the train descriptors collection that was set using the \texttt{add} method, by \hyperref[cppfunc.gpu.BruteForceMatcher.makeGpuCollection]{makeGpuCollection}. Or it can contain a user-defined collection. It must be a one-row matrix; each element is a \texttt{DevMem2D} that points to one train descriptors matrix.}
\cvarg{trainIdx}{One-row \texttt{CV\_32SC1} matrix. It will contain the best train index for each query. If some query descriptors are masked out in \texttt{maskCollection}, it will contain -1.}
\cvarg{imgIdx}{One-row \texttt{CV\_32SC1} matrix. It will contain the train image index for each query. If some query descriptors are masked out in \texttt{maskCollection}, it will contain -1.}
\cvarg{distance}{One-row \texttt{CV\_32FC1} matrix. It will contain the best distance for each query. If some query descriptors are masked out in \texttt{maskCollection}, it will contain \texttt{FLT\_MAX}.}
\cvarg{maskCollection}{\texttt{GpuMat} containing the set of masks. It can be obtained from \texttt{std::vector<GpuMat>} by \hyperref[cppfunc.gpu.BruteForceMatcher.makeGpuCollection]{makeGpuCollection}. Or it can contain a user-defined mask set. It must be an empty matrix or a one-row matrix; each element is a \texttt{PtrStep} that points to one mask.}
\cvarg{trainDescs}{Train set of descriptors. It will not be added to the train descriptors collection stored in the class object.}
\cvarg{trainIdx}{Matrix of size $\texttt{nQuery}\times\texttt{k}$ and type \texttt{CV\_32SC1}. \texttt{trainIdx.at<int>(queryIdx, i)} will contain the index of the i-th best train. If some query descriptors are masked out in \texttt{mask}, it will contain -1.}
\cvarg{distance}{Matrix of size $\texttt{nQuery}\times\texttt{k}$ and type \texttt{CV\_32FC1}. It will contain the distance for each query and the i-th best train. If some query descriptors are masked out in \texttt{mask}, it will contain \texttt{FLT\_MAX}.}
\cvarg{allDist}{Buffer to store all distances between query descriptors and train descriptors. It will have size $\texttt{nQuery}\times\texttt{nTrain}$ and type \texttt{CV\_32FC1}. \texttt{allDist.at<float>(queryIdx, trainIdx)} will contain \texttt{FLT\_MAX} if \texttt{trainIdx} is one of the k best; otherwise it will contain the distance between the \texttt{queryIdx} and \texttt{trainIdx} descriptors.}
\cvarg{k}{Number of the best matches to be found per each query descriptor (or fewer if it is not possible).}
\cvarg{mask}{Mask specifying permissible matches between input query and train matrices of descriptors.}
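For reference, a hedged usage sketch of the matcher via the high-level \texttt{knnMatch} overload that fills \texttt{std::vector< std::vector<DMatch> >}; \texttt{d\_queryDescs} and \texttt{d\_trainDescs} are assumed to be \texttt{CV\_32FC1} descriptor matrices already uploaded to the GPU:
\begin{lstlisting}
cv::gpu::BruteForceMatcher_GPU< cv::L2<float> > matcher;

std::vector< std::vector<cv::DMatch> > matches;
matcher.knnMatch(d_queryDescs, d_trainDescs, matches, 2);

// Keep only matches that pass a ratio test.
std::vector<cv::DMatch> good;
for (size_t i = 0; i < matches.size(); ++i)
    if (matches[i].size() == 2 &&
        matches[i][0].distance < 0.8f * matches[i][1].distance)
        good.push_back(matches[i][0]);
\end{lstlisting}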
\textbf{Please note:} This class doesn't allocate memory for the destination image. Usually this class is used inside \hyperref[class.gpu.FilterEngine]{cv::gpu::FilterEngine\_GPU}.
The base class for linear or non-linear filters that process columns of 2D arrays. Such filters are used for the "vertical" filtering passes in separable filters.
};
\end{lstlisting}
\textbf{Please note:} This class doesn't allocate memory for the destination image. Usually this class is used inside \hyperref[class.gpu.FilterEngine]{cv::gpu::FilterEngine\_GPU}.
\texttt{FilterEngine\_GPU} can process a rectangular sub-region of an image. By default, if \texttt{roi == Rect(0,0,-1,-1)}, \texttt{FilterEngine\_GPU} processes the inner region of the image (\texttt{Rect(anchor.x, anchor.y, src\_size.width - ksize.width, src\_size.height - ksize.height)}), because some filters don't check whether indices are outside the image, for better performance. See below which filters support processing the whole image and which do not, as well as the image type limitations.
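For illustration, a minimal sketch of using a filter engine with an explicit \texttt{roi} (assuming the OpenCV 2.x \texttt{gpu} API, a \texttt{CV\_8UC4} source image \texttt{src}, and the \texttt{createLinearFilter\_GPU} overload that returns a \texttt{FilterEngine\_GPU}):
\begin{lstlisting}
cv::gpu::GpuMat d_src(src), d_dst;

// 3x3 box (sum) kernel.
cv::Mat kernel = cv::Mat::ones(3, 3, CV_8UC1);

cv::Ptr<cv::gpu::FilterEngine_GPU> filter =
    cv::gpu::createLinearFilter_GPU(CV_8UC4, CV_8UC4, kernel);

// Default roi = Rect(0,0,-1,-1): only the inner region is processed.
filter->apply(d_src, d_dst);

// Explicit roi: process the whole image (if the filter supports it).
filter->apply(d_src, d_dst, cv::Rect(0, 0, d_src.cols, d_src.rows));
\end{lstlisting}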
\textbf{Please note:} The GPU filters don't support the in-place mode.
\cvarg{iterations}{Number of times dilation is applied.}
\end{description}
\textbf{Please note:} This filter doesn't check out-of-border accesses, so only a proper submatrix of a bigger matrix has to be passed to it.
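As an illustrative sketch of that restriction (assuming the OpenCV 2.x \texttt{gpu} API and a \texttt{CV\_8UC1} image \texttt{img}), the image is padded first and only the inner submatrix is dilated:
\begin{lstlisting}
cv::gpu::GpuMat d_img(img);
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));

// Pad the image so that out-of-border reads of the filter stay inside
// the allocated matrix, then dilate only the inner submatrix.
cv::gpu::GpuMat d_padded;
cv::gpu::copyMakeBorder(d_img, d_padded, 1, 1, 1, 1);
cv::gpu::GpuMat d_inner = d_padded(cv::Rect(1, 1, d_img.cols, d_img.rows));

cv::gpu::GpuMat d_dst;
cv::gpu::dilate(d_inner, d_dst, kernel);
\end{lstlisting}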
See also: \cvCppCross{dilate}, \hyperref[cppfunc.gpu.createMorphologyFilter]{cv::gpu::createMorphologyFilter\_GPU}.
\cvarg{iterations}{Number of times erosion and dilation are applied.}
\end{description}
\textbf{Please note:} This filter doesn't check out-of-border accesses, so only a proper submatrix of a bigger matrix has to be passed to it.
See also: \cvCppCross{morphologyEx}.
\cvarg{anchor}{Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.}
\end{description}
\textbf{Please note:} This filter doesn't check out-of-border accesses, so only a proper submatrix of a bigger matrix has to be passed to it.
See also: \cvCppCross{createLinearFilter}.
\cvarg{anchor}{Anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that the anchor is at the kernel center.}
\end{description}
\textbf{Please note:} This filter doesn't check out-of-border accesses, so only a proper submatrix of a bigger matrix has to be passed to it.
See also: \cvCppCross{filter2D}, \hyperref[cppfunc.gpu.createLinearFilter]{cv::gpu::createLinearFilter\_GPU}.
\cvarg{scale}{Optional scale factor for the computed Laplacian values (by default, no scaling is applied, see \cvCppCross{getDerivKernels}).}
\end{description}
\textbf{Please note:} This filter doesn't check out-of-border accesses, so only a proper submatrix of a bigger matrix has to be passed to it.
See also: \cvCppCross{Laplacian}, \cvCppCross{gpu::filter2D}.
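A short usage sketch (assuming a \texttt{CV\_8UC1} source image already uploaded as \texttt{d\_src}):
\begin{lstlisting}
cv::gpu::GpuMat d_dst;

// 3x3 Laplacian with 8-bit output, matching the 8-bit source;
// only the inner area of the image is processed.
cv::gpu::Laplacian(d_src, d_dst, CV_8U, 3);
\end{lstlisting}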
\cvarg{borderType}{Pixel extrapolation method; see \cvCppCross{borderInterpolate}. For limitations, see below.}
\end{description}
There are two versions of the algorithm: NPP and OpenCV. The NPP version is called when \texttt{srcType == CV\_8UC1} or \texttt{srcType == CV\_8UC4} and \texttt{bufType == srcType}; otherwise the OpenCV version is called. NPP supports only the \texttt{BORDER\_CONSTANT} border type and doesn't check indices outside the image. The OpenCV version supports only the \texttt{CV\_32F} buffer depth and the \texttt{BORDER\_REFLECT101}, \texttt{BORDER\_REPLICATE} and \texttt{BORDER\_CONSTANT} border types, and checks indices outside the image.
See also: \hyperref[cppfunc.gpu.getLinearColumnFilter]{cv::gpu::getLinearColumnFilter\_GPU}, \cvCppCross{createSeparableLinearFilter}.
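A hedged sketch of parameter choices that select each version of \texttt{getLinearRowFilter\_GPU} (the use of \cvCppCross{getGaussianKernel} to obtain a 1D row kernel is an illustrative assumption):
\begin{lstlisting}
cv::Mat rowKernel = cv::getGaussianKernel(5, -1, CV_32F);

// NPP version: 8-bit source, bufType == srcType, BORDER_CONSTANT only.
cv::Ptr<cv::gpu::BaseRowFilter_GPU> nppFilter =
    cv::gpu::getLinearRowFilter_GPU(CV_8UC4, CV_8UC4, rowKernel,
                                    -1, cv::BORDER_CONSTANT);

// OpenCV version: CV_32F buffer depth, reflected/replicated borders allowed.
cv::Ptr<cv::gpu::BaseRowFilter_GPU> cvFilter =
    cv::gpu::getLinearRowFilter_GPU(CV_8UC1, CV_32FC1, rowKernel,
                                    -1, cv::BORDER_REFLECT101);
\end{lstlisting}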
\cvarg{borderType}{Pixel extrapolation method; see \cvCppCross{borderInterpolate}. For limitations, see below.}
\end{description}
There are two versions of the algorithm: NPP and OpenCV. The NPP version is called when \texttt{dstType == CV\_8UC1} or \texttt{dstType == CV\_8UC4} and \texttt{bufType == dstType}; otherwise the OpenCV version is called. NPP supports only the \texttt{BORDER\_CONSTANT} border type and doesn't check indices outside the image. The OpenCV version supports only the \texttt{CV\_32F} buffer depth and the \texttt{BORDER\_REFLECT101}, \texttt{BORDER\_REPLICATE} and \texttt{BORDER\_CONSTANT} border types, and checks indices outside the image.\newline
See also: \hyperref[cppfunc.gpu.getLinearRowFilter]{cv::gpu::getLinearRowFilter\_GPU}, \cvCppCross{createSeparableLinearFilter}.