* Common Canny parallelization added. TBB and single thread code removed. Final pass vectorized with SSE2 intrinsics.
* Wrong #ifdef replaced with #if
* Merged into the actual Canny version
* Merged the common parallelized Canny with the actual Canny implementation
* Remove 'Mutex *mutex' and pass 'Mutex mutex' from outside to parallelCanny
* Replaced extern Mutex with an internal mutable Mutex (see the sketch below).
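A minimal sketch of that pattern, assuming a cv::ParallelLoopBody-based loop
body; the class, member and variable names are hypothetical and not taken from
the actual parallelCanny code:

#include <deque>
#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>

// Hypothetical loop body: the mutex is owned by the functor itself and is
// declared mutable so it can be locked inside the const operator(), instead
// of being passed in from the caller.
class ParallelCannySketch : public cv::ParallelLoopBody
{
public:
    explicit ParallelCannySketch(std::deque<int>& sharedStack)
        : stack(sharedStack) {}

    void operator()(const cv::Range& range) const
    {
        for (int row = range.start; row < range.end; ++row)
        {
            // ... per-row edge processing would go here ...
            cv::AutoLock lock(mutex);   // guard access to the shared stack
            stack.push_back(row);
        }
    }

private:
    std::deque<int>& stack;
    mutable cv::Mutex mutex;  // internal, mutable: lockable from const code
};

// Usage sketch:
//     cv::parallel_for_(cv::Range(0, rows), ParallelCannySketch(stack));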
When using OCL, the results of goodFeaturesToTrack() vary slightly from
run to run. This appears to be because the order of the results from
the findCorners kernel depends on thread execution and the sorting
function that is used at the end to rank the features only enforces a
partial sort order.
This does not materially impact the quality of the results, but it
makes it hard to build regression tests and generally introduces noise
into the system that should be avoided.
An easy fix is to change the sort function to enforce a total sort on
the features, even in cases where the match quality is exactly the same
for two features.
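A minimal sketch of that idea (the Corner struct and comparator name are
hypothetical, not the actual OpenCV code): break ties between corners of
identical quality using their coordinates, so the final order is deterministic
regardless of the order in which the kernel emitted them.

#include <algorithm>
#include <vector>

// Hypothetical corner record, as ranked after the findCorners kernel:
struct Corner
{
    float x, y, quality;
};

// Sort primarily by descending quality; when two corners have exactly the
// same quality, fall back to their coordinates so the order is total.
static bool cornerGreater(const Corner& a, const Corner& b)
{
    if (a.quality != b.quality)
        return a.quality > b.quality;
    if (a.y != b.y)
        return a.y < b.y;
    return a.x < b.x;
}

// Usage sketch:
//     std::vector<Corner> corners = /* output of findCorners */;
//     std::sort(corners.begin(), corners.end(), cornerGreater);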
Add OpenCL support to linearPolar & logPolar.
The OpenCL code uses float instead of double so that it does not require
the cl_khr_fp64 extension, at the cost of a slight loss of precision.
Add explicit conversion
Add explicit conversion from double to float to eliminate a warning
during compilation.
Rewrite linearPolar & logPolar so that they no longer depend on the
deprecated CvMat API. This resolves issue 6377, because the two routines
no longer convert UMat to CvMat.
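A minimal usage sketch, assuming the input is copied into a cv::UMat so the
transparent API can dispatch to the new OpenCL kernels when a device is
available; the file name and the scale parameters are placeholders:

#include <algorithm>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat srcMat = cv::imread("input.png", cv::IMREAD_COLOR);
    cv::UMat src, lin, lg;
    srcMat.copyTo(src);   // work on a UMat so the OpenCL path can be used

    cv::Point2f center(src.cols * 0.5f, src.rows * 0.5f);
    double maxRadius = 0.5 * std::min(src.cols, src.rows);

    cv::linearPolar(src, lin, center, maxRadius,
                    cv::INTER_LINEAR + cv::WARP_FILL_OUTLIERS);
    cv::logPolar(src, lg, center, 40.0,   // 40.0: arbitrary magnitude scale
                 cv::INTER_LINEAR + cv::WARP_FILL_OUTLIERS);
    return 0;
}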
When an invalid kernel size is set, the error message only tells the user
that it must be odd. However, the rejection conditions also include values
> 7, which must be communicated as well; otherwise the message is
incomplete and confusing for a user who does not know that only the values
3, 5 and 7 are accepted.
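A sketch of the kind of check and message this refers to; the function and
variable names are illustrative, not the actual OpenCV source:

#include <opencv2/core.hpp>

static void checkKernelSize(int ksize)
{
    // Only 3, 5 and 7 are accepted, so the message should say more than
    // "must be odd".
    if (ksize != 3 && ksize != 5 && ksize != 7)
        CV_Error(cv::Error::StsBadArg,
                 "kernel size must be 3, 5 or 7 (odd and not larger than 7)");
}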
See the code snippet below:
while (l_counter != 0)
{
    int mod = l_counter % LOCAL_TOTAL;
    int pix_per_thr = l_counter / LOCAL_TOTAL + ((lid < mod) ? 1 : 0);
    for (int i = 0; i < pix_per_thr; ++i)
    {
        int index = atomic_dec(&l_counter) - 1;
        ....
    }
    ....
    // all work items must reach this barrier, so that they re-check the
    // while condition with a consistent view of l_counter
    barrier(CLK_LOCAL_MEM_FENCE);
}
If we don't put a barrier before the for loop, there is a possibility that
some work items enter the loop while others do not: l_counter is
decremented inside the for loop and may drop to zero, so the remaining
work items never enter the while loop. If this happens, it breaks the
barrier rule that all work items must reach the same barrier, and,
depending on the OpenCL platform's implementation, it may hang the GPU.
This issue is raised at:
https://github.com/Itseez/opencv/issues/5175
Signed-off-by: Zhigang Gong <zhigang.gong@linux.intel.com>