Merge pull request #8951 from hrnr:akaze_part2

[GSOC] Speeding-up AKAZE, part #2 (#8951)

* feature2d: instrument more functions used in AKAZE

* rework Compute_Determinant_Hessian_Response

* this took 84% of the time of Feature_Detection
* run everything in parallel
* compute Scharr kernels just once
* compute sigma more efficiently
* allocate all matrices in evolution without zeroing

* features2d: add one bigger image to tests

* tests now use images of 600x768, 900x600 and 1385x700 to cover different resolutions

* explicitly zero Lx and Ly

* add Lflow and Lstep to the evolution as in the original AKAZE code

* reworked computation of keypoint orientation

integrated faster function from https://github.com/h2suzuki/fast_akaze

* use standard fastAtan2 instead of getAngle

* compute keypoints orientation in parallel

* fix visual studio warnings

* replace some wrapped functions with direct calls to OpenCV functions

* improved readability for people familiar with OpenCV
* do not save the image twice in the base level

* rework diffusivity stencil

* use a one-pass stencil for diffusivity from https://github.com/h2suzuki/fast_akaze
* improve locality in Create_Scale_Space

* always compute the determinant of the Hessian and spatial derivatives

* these always need to be computed, since we need the derivatives when computing descriptors
* fixed tests of AKAZE with KAZE descriptors which have been affected by this

Currently it computes all first and second order derivatives together with the determinant of the Hessian. For descriptors it would be enough to compute just the first order derivatives, but it is probably not worth optimizing for the scenario where descriptors and keypoints are computed separately, since that is already very inefficient. When computing keypoints and descriptors together, it is faster to do it the current way (it preserves locality).

* parallelize nonlinear diffusion computation

* do the multiplication right in the nld diffusivity kernel

* rework kfactor computation

* get rid of shared buffers when creating the scale space pyramid; the performance impact is negligible

* features2d: initialize TBB scheduler in perf tests

* ensures more stable output
* more reasonable profiles, since the first call of parallel_for_ no longer takes a big performance hit

* compute_kfactor: interleave finding of maximum and computing distance

* no need to go through the data twice

* start to use UMats in AKAZE to leverage OpenCL in the future

* fixed a bug that prevented computing the determinant for a scale pyramid of size 1 (just the base image)
* all descriptors now support writing to uninitialized memory
* use InputArray and OutputArray for the input image and descriptors; this allows us to use the UMat that the user passes in

* enable use of all existing ocl paths in AKAZE

* all parts that use ocl-enabled functions should use ocl by now

* imgproc: fix dispatching of IPP version when OCL is disabled

* when OCL is disabled, the IPP version should always be preferred (even when dst is a UMat)

* get rid of copy in DeterminantHessian response

* this slows down the CPU version considerably
* do not run in parallel when running with OCL

* store derivations as UMat in pyramid

* enables the OCL path for computing the determinant of the Hessian
* will allow computing descriptors on the GPU in the future

* port diffusivity to OCL

* diffusivity itself is not a blocker, but this saves us downloading and uploading the derivatives

* implement kernel for nonlinear scalar diffusion step

* download the pyramid from the GPU just once

we don't want to download matrices from the GPU ad hoc whenever a function in AKAZE needs them. There is a HUGE mapping overhead and, without shared memory support, a LOT of unnecessary transfers.

This maps/downloads matrices just once.

* fix bug with uninitialized values in nonlinear diffusion

* this was causing spurious segfaults in stitching tests due to propagation of NaNs
* added new test, which checks for NaNs (added new debug asserts for NaNs)
* valgrind now says everything is ok

* add nonlinear diffusion step OCL implementation

* Lt in the pyramid changed to UMat; it will be downloaded from the GPU along with Lx and Ly
* fix bug in the pm_g2 kernel. OpenCV mangles the dimensions passed to OpenCL, so we need to check boundaries in each OCL kernel.

* port computing of determinant to OCL

* computing the determinant is not a blocker, but with this change we don't need to download all spatial derivatives to the CPU; we only download the determinant
* make Ldet in the pyramid a UMat and download it from the GPU together with the other parts of the pyramid
* add profiling macros

* fix visual studio warning

* instrument non_linear_diffusion

* remove changes I have made to TEvolution

* TEvolution is used only in KAZE now

* Revert "features2d: initialize TBB scheduler in perf tests"

This reverts commit ba81e2a711.
Jiri Horner 7 years ago committed by Alexander Alekhin
parent 2959e7aba9
commit bb6496d9e5
Changed files (lines changed):
  modules/features2d/perf/perf_feature2d.hpp (3)
  modules/features2d/src/akaze.cpp (9)
  modules/features2d/src/kaze/AKAZEFeatures.cpp (1032)
  modules/features2d/src/kaze/AKAZEFeatures.h (39)
  modules/features2d/src/kaze/TEvolution.h (1)
  modules/features2d/src/kaze/nldiffusion_functions.cpp (33)
  modules/features2d/src/kaze/nldiffusion_functions.h (8)
  modules/features2d/src/keypoint.cpp (2)
  modules/features2d/src/opencl/akaze.cl (122)
  modules/features2d/test/test_akaze.cpp (47)
  modules/features2d/test/test_descriptors_invariance.cpp (4)
  modules/features2d/test/test_detectors_regression.cpp (21)
  modules/imgproc/src/smooth.cpp (2)

@@ -35,7 +35,8 @@ typedef perf::TestBaseWithParam<Feature2DType_String_t> feature2d;
 #define TEST_IMAGES testing::Values(\
     "cv/detectors_descriptors_evaluation/images_datasets/leuven/img1.png",\
-    "stitching/a3.png")
+    "stitching/a3.png", \
+    "stitching/s2.jpg")
static inline Ptr<Feature2D> getFeature2D(Feature2DType type)
{

@@ -210,13 +210,10 @@ namespace cv
     if( descriptors.needed() )
     {
-        Mat desc;
-        impl.Compute_Descriptors(keypoints, desc);
-        // TODO optimize this copy
-        desc.copyTo(descriptors);
+        impl.Compute_Descriptors(keypoints, descriptors);
-        CV_Assert((!desc.rows || desc.cols == descriptorSize()));
-        CV_Assert((!desc.rows || (desc.type() == descriptorType())));
+        CV_Assert((descriptors.empty() || descriptors.cols() == descriptorSize()));
+        CV_Assert((descriptors.empty() || (descriptors.type() == descriptorType())));
     }
 }

File diff suppressed because it is too large.

@@ -12,11 +12,40 @@
/* ************************************************************************* */
// Includes
#include "AKAZEConfig.h"
#include "TEvolution.h"
namespace cv
{
/// A-KAZE nonlinear diffusion filtering evolution
struct Evolution
{
Evolution() {
etime = 0.0f;
esigma = 0.0f;
octave = 0;
sublevel = 0;
sigma_size = 0;
}
UMat Lx, Ly; ///< First order spatial derivatives
UMat Lt; ///< Evolution image
UMat Lsmooth; ///< Smoothed image, used only for computing determinant, released afterwards
UMat Ldet; ///< Detector response
// the same as above, holding CPU mapping to UMats above
Mat Mx, My;
Mat Mt;
Mat Mdet;
Size size; ///< Size of the layer
float etime; ///< Evolution time
float esigma; ///< Evolution sigma. For linear diffusion t = sigma^2 / 2
int octave; ///< Image octave
int sublevel; ///< Image sublevel in each octave
int sigma_size; ///< Integer esigma. For computing the feature detector responses
float octave_ratio; ///< Scaling ratio of this octave. ratio = 2^octave
};
/* ************************************************************************* */
// AKAZE Class Declaration
class AKAZEFeatures {
@@ -24,7 +53,7 @@ class AKAZEFeatures {
private:
AKAZEOptions options_; ///< Configuration options for AKAZE
-    std::vector<TEvolution> evolution_;  ///< Vector of nonlinear diffusion evolution
+    std::vector<Evolution> evolution_;   ///< Vector of nonlinear diffusion evolution
/// FED parameters
int ncycles_; ///< Number of cycles
@@ -44,16 +73,14 @@ public:
     /// Scale Space methods
     void Allocate_Memory_Evolution();
-    int Create_Nonlinear_Scale_Space(const cv::Mat& img);
+    void Create_Nonlinear_Scale_Space(InputArray img);
     void Feature_Detection(std::vector<cv::KeyPoint>& kpts);
     void Compute_Determinant_Hessian_Response(void);
-    void Compute_Multiscale_Derivatives(void);
     void Find_Scale_Space_Extrema(std::vector<cv::KeyPoint>& kpts);
     void Do_Subpixel_Refinement(std::vector<cv::KeyPoint>& kpts);
     /// Feature description methods
-    void Compute_Descriptors(std::vector<cv::KeyPoint>& kpts, cv::Mat& desc);
-    static void Compute_Main_Orientation(cv::KeyPoint& kpt, const std::vector<TEvolution>& evolution_);
+    void Compute_Descriptors(std::vector<cv::KeyPoint>& kpts, OutputArray desc);
+    void Compute_Keypoints_Orientation(std::vector<cv::KeyPoint>& kpts) const;
 };

@@ -28,6 +28,7 @@ struct TEvolution
Mat Lt; ///< Evolution image
Mat Lsmooth; ///< Smoothed image
Mat Ldet; ///< Detector response
float etime; ///< Evolution time
float esigma; ///< Evolution sigma. For linear diffusion t = sigma^2 / 2
int octave; ///< Image octave

@@ -91,7 +91,11 @@ void image_derivatives_scharr(const cv::Mat& src, cv::Mat& dst, int xorder, int
  * @param dst Output image
  * @param k Contrast factor parameter
  */
-void pm_g1(const cv::Mat& Lx, const cv::Mat& Ly, cv::Mat& dst, float k) {
+void pm_g1(InputArray _Lx, InputArray _Ly, OutputArray _dst, float k) {
+  _dst.create(_Lx.size(), _Lx.type());
+  Mat Lx = _Lx.getMat();
+  Mat Ly = _Ly.getMat();
+  Mat dst = _dst.getMat();
   Size sz = Lx.size();
   float inv_k = 1.0f / (k*k);
@@ -118,7 +122,13 @@ void pm_g1(const cv::Mat& Lx, const cv::Mat& Ly, cv::Mat& dst, float k) {
  * @param dst Output image
  * @param k Contrast factor parameter
  */
-void pm_g2(const cv::Mat &Lx, const cv::Mat& Ly, cv::Mat& dst, float k) {
+void pm_g2(InputArray _Lx, InputArray _Ly, OutputArray _dst, float k) {
+  CV_INSTRUMENT_REGION()
+  _dst.create(_Lx.size(), _Lx.type());
+  Mat Lx = _Lx.getMat();
+  Mat Ly = _Ly.getMat();
+  Mat dst = _dst.getMat();
   Size sz = Lx.size();
   dst.create(sz, Lx.type());
@@ -144,7 +154,11 @@ void pm_g2(const cv::Mat &Lx, const cv::Mat& Ly, cv::Mat& dst, float k) {
  * Applications of nonlinear diffusion in image processing and computer vision,
  * Proceedings of Algorithmy 2000
  */
-void weickert_diffusivity(const cv::Mat& Lx, const cv::Mat& Ly, cv::Mat& dst, float k) {
+void weickert_diffusivity(InputArray _Lx, InputArray _Ly, OutputArray _dst, float k) {
+  _dst.create(_Lx.size(), _Lx.type());
+  Mat Lx = _Lx.getMat();
+  Mat Ly = _Ly.getMat();
+  Mat dst = _dst.getMat();
   Size sz = Lx.size();
   float inv_k = 1.0f / (k*k);
@@ -177,7 +191,11 @@ void weickert_diffusivity(const cv::Mat& Lx, const cv::Mat& Ly, cv::Mat& dst, fl
  * Applications of nonlinear diffusion in image processing and computer vision,
  * Proceedings of Algorithmy 2000
  */
-void charbonnier_diffusivity(const cv::Mat& Lx, const cv::Mat& Ly, cv::Mat& dst, float k) {
+void charbonnier_diffusivity(InputArray _Lx, InputArray _Ly, OutputArray _dst, float k) {
+  _dst.create(_Lx.size(), _Lx.type());
+  Mat Lx = _Lx.getMat();
+  Mat Ly = _Ly.getMat();
+  Mat dst = _dst.getMat();
   Size sz = Lx.size();
   float inv_k = 1.0f / (k*k);
@@ -209,6 +227,7 @@ void charbonnier_diffusivity(const cv::Mat& Lx, const cv::Mat& Ly, cv::Mat& dst,
  * @return k contrast factor
  */
 float compute_k_percentile(const cv::Mat& img, float perc, float gscale, int nbins, int ksize_x, int ksize_y) {
+  CV_INSTRUMENT_REGION()
   int nbin = 0, nelements = 0, nthreshold = 0, k = 0;
   float kperc = 0.0, modg = 0.0;
@@ -307,6 +326,7 @@ void compute_scharr_derivatives(const cv::Mat& src, cv::Mat& dst, int xorder, in
  * @param scale_ Scale factor or derivative size
  */
 void compute_derivative_kernels(cv::OutputArray _kx, cv::OutputArray _ky, int dx, int dy, int scale) {
+  CV_INSTRUMENT_REGION()
   int ksize = 3 + 2 * (scale - 1);
@@ -320,6 +340,7 @@ void compute_derivative_kernels(cv::OutputArray _kx, cv::OutputArray _ky, int dx
   _ky.create(ksize, 1, CV_32F, -1, true);
   Mat kx = _kx.getMat();
   Mat ky = _ky.getMat();
+  std::vector<float> kerI;
   float w = 10.0f / 3.0f;
   float norm = 1.0f / (2.0f*scale*(w + 2.0f));
@@ -327,7 +348,7 @@ void compute_derivative_kernels(cv::OutputArray _kx, cv::OutputArray _ky, int dx
   for (int k = 0; k < 2; k++) {
     Mat* kernel = k == 0 ? &kx : &ky;
     int order = k == 0 ? dx : dy;
-    std::vector<float> kerI(ksize, 0.0f);
+    kerI.assign(ksize, 0.0f);
     if (order == 0) {
       kerI[0] = norm, kerI[ksize / 2] = w*norm, kerI[ksize - 1] = norm;
@@ -403,6 +424,7 @@ private:
  * dL_by_ds = d(c dL_by_dx)_by_dx + d(c dL_by_dy)_by_dy
  */
 void nld_step_scalar(cv::Mat& Ld, const cv::Mat& c, cv::Mat& Lstep, float stepsize) {
+  CV_INSTRUMENT_REGION()
   cv::parallel_for_(cv::Range(1, Lstep.rows - 1), Nld_Step_Scalar_Invoker(Ld, c, Lstep, stepsize), (double)Ld.total()/(1 << 16));
@@ -472,7 +494,6 @@ void nld_step_scalar(cv::Mat& Ld, const cv::Mat& c, cv::Mat& Lstep, float stepsi
  * @param dst Output image with half of the resolution of the input image
  */
 void halfsample_image(const cv::Mat& src, cv::Mat& dst) {
-  // Make sure the destination image is of the right size
   CV_Assert(src.cols / 2 == dst.cols);
   CV_Assert(src.rows / 2 == dst.rows);

@@ -21,10 +21,10 @@ namespace cv
 void gaussian_2D_convolution(const cv::Mat& src, cv::Mat& dst, int ksize_x, int ksize_y, float sigma);
 // Diffusivity functions
-void pm_g1(const cv::Mat& Lx, const cv::Mat& Ly, cv::Mat& dst, float k);
-void pm_g2(const cv::Mat& Lx, const cv::Mat& Ly, cv::Mat& dst, float k);
-void weickert_diffusivity(const cv::Mat& Lx, const cv::Mat& Ly, cv::Mat& dst, float k);
-void charbonnier_diffusivity(const cv::Mat& Lx, const cv::Mat& Ly, cv::Mat& dst, float k);
+void pm_g1(InputArray Lx, InputArray Ly, OutputArray dst, float k);
+void pm_g2(InputArray Lx, InputArray Ly, OutputArray dst, float k);
+void weickert_diffusivity(InputArray Lx, InputArray Ly, OutputArray dst, float k);
+void charbonnier_diffusivity(InputArray Lx, InputArray Ly, OutputArray dst, float k);
 float compute_k_percentile(const cv::Mat& img, float perc, float gscale, int nbins, int ksize_x, int ksize_y);

@@ -156,6 +156,8 @@ private:
 void KeyPointsFilter::runByPixelsMask( std::vector<KeyPoint>& keypoints, const Mat& mask )
 {
+    CV_INSTRUMENT_REGION()
+
     if( mask.empty() )
         return;

@@ -0,0 +1,122 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
/**
* @brief This function computes the Perona and Malik conductivity coefficient g2
* g2 = 1 / (1 + dL^2 / k^2)
* @param lx First order image derivative in X-direction (horizontal)
* @param ly First order image derivative in Y-direction (vertical)
* @param dst Output image
* @param k Contrast factor parameter
*/
__kernel void
AKAZE_pm_g2(__global const float* lx, __global const float* ly, __global float* dst,
float k, int size)
{
int i = get_global_id(0);
// OpenCV plays with dimensions so we need explicit check for this
if (!(i < size))
{
return;
}
const float k2inv = 1.0f / (k * k);
dst[i] = 1.0f / (1.0f + ((lx[i] * lx[i] + ly[i] * ly[i]) * k2inv));
}
__kernel void
AKAZE_nld_step_scalar(__global const float* lt, int lt_step, int lt_offset, int rows, int cols,
__global const float* lf, __global float* dst, float step_size)
{
/* The labeling scheme for this five star stencil:
[ a ]
[ -1 c +1 ]
[ b ]
*/
// column-first indexing
int i = get_global_id(1);
int j = get_global_id(0);
// OpenCV plays with dimensions so we need explicit check for this
if (!(i < rows && j < cols))
{
return;
}
// get row indexes
int a = (i - 1) * cols;
int c = (i ) * cols;
int b = (i + 1) * cols;
// compute stencil
float res = 0.0f;
if (i == 0) // first row
{
if (j == 0 || j == (cols - 1))
{
res = 0.0f;
} else
{
res = (lf[c + j] + lf[c + j + 1])*(lt[c + j + 1] - lt[c + j]) +
(lf[c + j] + lf[c + j - 1])*(lt[c + j - 1] - lt[c + j]) +
(lf[c + j] + lf[b + j ])*(lt[b + j ] - lt[c + j]);
}
} else if (i == (rows - 1)) // last row
{
if (j == 0 || j == (cols - 1))
{
res = 0.0f;
} else
{
res = (lf[c + j] + lf[c + j + 1])*(lt[c + j + 1] - lt[c + j]) +
(lf[c + j] + lf[c + j - 1])*(lt[c + j - 1] - lt[c + j]) +
(lf[c + j] + lf[a + j ])*(lt[a + j ] - lt[c + j]);
}
} else // inner rows
{
if (j == 0) // first column
{
res = (lf[c + 0] + lf[c + 1])*(lt[c + 1] - lt[c + 0]) +
(lf[c + 0] + lf[b + 0])*(lt[b + 0] - lt[c + 0]) +
(lf[c + 0] + lf[a + 0])*(lt[a + 0] - lt[c + 0]);
} else if (j == (cols - 1)) // last column
{
res = (lf[c + j] + lf[c + j - 1])*(lt[c + j - 1] - lt[c + j]) +
(lf[c + j] + lf[b + j ])*(lt[b + j ] - lt[c + j]) +
(lf[c + j] + lf[a + j ])*(lt[a + j ] - lt[c + j]);
} else // inner stencil
{
res = (lf[c + j] + lf[c + j + 1])*(lt[c + j + 1] - lt[c + j]) +
(lf[c + j] + lf[c + j - 1])*(lt[c + j - 1] - lt[c + j]) +
(lf[c + j] + lf[b + j ])*(lt[b + j ] - lt[c + j]) +
(lf[c + j] + lf[a + j ])*(lt[a + j ] - lt[c + j]);
}
}
dst[c + j] = res * step_size;
}
/**
* @brief Compute determinant from hessians
* @details Compute Ldet by (Lxx.mul(Lyy) - Lxy.mul(Lxy)) * sigma
*
* @param lxx second order spatial derivative Lxx
* @param lxy second order spatial derivative Lxy
* @param lyy second order spatial derivative Lyy
* @param dst output determinant
* @param sigma determinant will be scaled by this sigma
*/
__kernel void
AKAZE_compute_determinant(__global const float* lxx, __global const float* lxy, __global const float* lyy,
__global float* dst, float sigma, int size)
{
int i = get_global_id(0);
// OpenCV plays with dimensions so we need explicit check for this
if (!(i < size))
{
return;
}
dst[i] = (lxx[i] * lyy[i] - lxy[i] * lxy[i]) * sigma;
}

@@ -0,0 +1,47 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
#include "test_precomp.hpp"
using namespace std;
using namespace cv;
TEST(Features2d_AKAZE, detect_and_compute_split)
{
Mat testImg(100, 100, CV_8U);
RNG rng(101);
rng.fill(testImg, RNG::UNIFORM, Scalar(0), Scalar(255), true);
Ptr<Feature2D> ext = AKAZE::create(AKAZE::DESCRIPTOR_MLDB, 0, 3, 0.001f, 1, 1, KAZE::DIFF_PM_G2);
vector<KeyPoint> detAndCompKps;
Mat desc;
ext->detectAndCompute(testImg, noArray(), detAndCompKps, desc);
vector<KeyPoint> detKps;
ext->detect(testImg, detKps);
ASSERT_EQ(detKps.size(), detAndCompKps.size());
for(size_t i = 0; i < detKps.size(); i++)
ASSERT_EQ(detKps[i].hash(), detAndCompKps[i].hash());
}
/**
* This test is here to guard propagation of NaNs that happens on this image. NaNs are guarded
* by debug asserts in AKAZE, which should fire for you if you are lucky.
*
* This test also reveals problems with uninitialized memory that happens only on this image.
* This is very hard to hit and depends a lot on particular allocator. Run this test in valgrind and check
* for uninitialized values if you think you are hitting this problem again.
*/
TEST(Features2d_AKAZE, uninitialized_and_nans)
{
Mat b1 = imread(cvtest::TS::ptr()->get_data_path() + "../stitching/b1.png");
ASSERT_FALSE(b1.empty());
vector<KeyPoint> keypoints;
Mat desc;
Ptr<Feature2D> akaze = AKAZE::create();
akaze->detectAndCompute(b1, noArray(), keypoints, desc);
}

@@ -179,7 +179,7 @@ INSTANTIATE_TEST_CASE_P(AKAZE, DescriptorRotationInvariance,
     Value(IMAGE_TSUKUBA, AKAZE::create(), AKAZE::create(), 0.99f));
 INSTANTIATE_TEST_CASE_P(AKAZE_DESCRIPTOR_KAZE, DescriptorRotationInvariance,
-    Value(IMAGE_TSUKUBA, AKAZE::create(AKAZE::DESCRIPTOR_KAZE), AKAZE::create(AKAZE::DESCRIPTOR_KAZE), 0.002f));
+    Value(IMAGE_TSUKUBA, AKAZE::create(AKAZE::DESCRIPTOR_KAZE), AKAZE::create(AKAZE::DESCRIPTOR_KAZE), 0.99f));
/*
* Descriptor's scale invariance check
@@ -189,4 +189,4 @@ INSTANTIATE_TEST_CASE_P(AKAZE, DescriptorScaleInvariance,
     Value(IMAGE_BIKES, AKAZE::create(), AKAZE::create(), 0.6f));
 INSTANTIATE_TEST_CASE_P(AKAZE_DESCRIPTOR_KAZE, DescriptorScaleInvariance,
-    Value(IMAGE_BIKES, AKAZE::create(AKAZE::DESCRIPTOR_KAZE), AKAZE::create(AKAZE::DESCRIPTOR_KAZE), 0.0004f));
+    Value(IMAGE_BIKES, AKAZE::create(AKAZE::DESCRIPTOR_KAZE), AKAZE::create(AKAZE::DESCRIPTOR_KAZE), 0.55f));

@@ -307,24 +307,3 @@ TEST( Features2d_Detector_AKAZE_DESCRIPTOR_KAZE, regression )
     CV_FeatureDetectorTest test( "detector-akaze-with-kaze-desc", AKAZE::create(AKAZE::DESCRIPTOR_KAZE) );
     test.safe_run();
 }
-TEST( Features2d_Detector_AKAZE, detect_and_compute_split )
-{
-    Mat testImg(100, 100, CV_8U);
-    RNG rng(101);
-    rng.fill(testImg, RNG::UNIFORM, Scalar(0), Scalar(255), true);
-    Ptr<Feature2D> ext = AKAZE::create(AKAZE::DESCRIPTOR_MLDB, 0, 3, 0.001f, 1, 1, KAZE::DIFF_PM_G2);
-    vector<KeyPoint> detAndCompKps;
-    Mat desc;
-    ext->detectAndCompute(testImg, noArray(), detAndCompKps, desc);
-    vector<KeyPoint> detKps;
-    ext->detect(testImg, detKps);
-    ASSERT_EQ(detKps.size(), detAndCompKps.size());
-    for(size_t i = 0; i < detKps.size(); i++)
-        ASSERT_EQ(detKps[i].hash(), detAndCompKps[i].hash());
-}

@@ -2386,7 +2386,7 @@ void cv::GaussianBlur( InputArray _src, OutputArray _dst, Size ksize,
     if(sigma1 == 0 && sigma2 == 0 && tegra::useTegra() && tegra::gaussian(src, dst, ksize, borderType))
         return;
 #endif
-    bool useOpenCL = (_dst.isUMat() && _src.dims() <= 2 &&
+    bool useOpenCL = (ocl::useOpenCL() && _dst.isUMat() && _src.dims() <= 2 &&
                ((ksize.width == 3 && ksize.height == 3) ||
                (ksize.width == 5 && ksize.height == 5)) &&
                _src.rows() > ksize.height && _src.cols() > ksize.width);
