* Add the possibility to set more than one tree for the hierarchical KMeans (default is still 1 tree).
This particularly improves NN retrieval results with binary vectors, allowing better quality
than LSH at similar processing time when speed is the criterion.
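A minimal usage sketch of the feature above, assuming the post-change cv::flann API in which KMeansIndexParams takes a trailing trees argument (default 1) and binary CV_8U descriptors are indexed with Hamming distance; the exact signature may differ:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/flann.hpp>

// Build a hierarchical KMeans index over binary descriptors (e.g. ORB, CV_8U)
// with 4 trees instead of the default 1, then run a 2-NN search.
void binaryKMeansSearch(const cv::Mat& descriptors, const cv::Mat& queries)
{
    // branching=32, iterations=11, random centers, cb_index=0.2, trees=4 (assumed parameter order)
    cv::flann::KMeansIndexParams params(32, 11, cvflann::FLANN_CENTERS_RANDOM, 0.2f, 4);
    cv::flann::Index index(descriptors, params, cvflann::FLANN_DIST_HAMMING);

    cv::Mat indices, dists;
    index.knnSearch(queries, indices, dists, 2, cv::flann::SearchParams(32));
}
```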
* Add explanations of FLANN's hierarchical KMeans for binary data.
* Fix trees parsing behavior in hierarchical_clustering_index:
Before, when maxCheck was reached during the first descent of a tree, time was still wasted descending
the remaining trees down to their best leaf, only to skip the points stored there.
Now we can choose either to stop parsing the remaining trees once maxCheck is reached,
or to perform one descent in every tree even if maxCheck was already reached in an earlier one.
* Apply the same change to kdtree.
As each leaf contains only 1 point (unlike hierarchical_clustering), the difference is only visible when trees > maxCheck.
* Add the new explore_all_trees parameter to miniflann
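A usage sketch of the new option, assuming the extended constructor SearchParams(checks, eps, sorted, explore_all_trees) added by this change, with explore_all_trees defaulting to false:

```cpp
#include <opencv2/flann.hpp>

// With explore_all_trees=false the search stops descending further trees once
// maxCheck is reached; with true it performs one descent in every tree.
cv::flann::SearchParams stopOnMaxCheck(32, 0.f, true);         // default behavior
cv::flann::SearchParams exploreAllTrees(32, 0.f, true, true);  // force one descent per tree
```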
* Adapt the FlannBasedMatcher read_write test to the additional search parameter
* Adapt java tests to the additional parameter in SearchParams
* Fix the ABI dump failure caused by the SearchParams interface change
* A ctor calling another ctor of the same class (delegating constructor) is only fully supported from C++11
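A minimal illustration of that language point; the class name and members below are hypothetical, not the OpenCV types:

```cpp
// C++11 delegating constructor: the default ctor forwards to the full one.
// Pre-C++11 compilers reject this, so a shared init() helper is needed instead.
struct SearchParamsLike
{
    SearchParamsLike(int checks, float eps, bool sorted, bool exploreAllTrees)
        : checks_(checks), eps_(eps), sorted_(sorted), exploreAllTrees_(exploreAllTrees) {}

    SearchParamsLike() : SearchParamsLike(32, 0.f, true, false) {}  // delegating ctor

    int checks_; float eps_; bool sorted_; bool exploreAllTrees_;
};
```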
Pev binary kmeans
* Ongoing work transposing kmeans clustering method for bitfields: the computeClustering method
Ongoing work transposing kmeans clustering method for bitfields: interface computeBitfieldClustering
Fix genericity of computeNodeStatistics
Ongoing work transposing kmeans clustering method for bitfields: adapt computeNodeStatistics()
Ongoing work transposing kmeans clustering method for bitfields: adapt findNN() method
Ongoing work transposing kmeans clustering method for bitfields: allow kmeans with Hamming distance
Ongoing work transposing kmeans clustering method for bitfields: adapt distances code
Ongoing work transposing kmeans clustering method for bitfields: adapt load/save code
Ongoing work transposing kmeans clustering method for bitfields: adapt kmeans hierarchicalClustering()
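As context for the bitfield work listed above, a generic sketch (not the FLANN implementation) of the Hamming distance used when clustering packed binary descriptors:

```cpp
#include <cstddef>
#include <cstdint>

// Hamming distance between two packed binary descriptors: popcount of the
// XOR, accumulated byte by byte (Kernighan's bit-clearing trick).
unsigned hammingDistance(const std::uint8_t* a, const std::uint8_t* b, std::size_t bytes)
{
    unsigned dist = 0;
    for (std::size_t i = 0; i < bytes; ++i)
    {
        std::uint8_t x = static_cast<std::uint8_t>(a[i] ^ b[i]);
        while (x) { x &= static_cast<std::uint8_t>(x - 1); ++dist; }
    }
    return dist;
}
```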
PivotType -> CentersType Renaming
Fix type casting for ARM SIMD implementation of Hamming
Fix warnings with Win32 compilation
Fix warnings with Win64 compilation
Fix wrong parenthesis position on rounding
* Ensure proper rounding when CentersType is integral
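A hypothetical illustration of the rounding concern in the two items above (not the actual FLANN code): when the accumulated sum is converted to an integral CentersType, the +0.5 must apply to the quotient, not to the divisor:

```cpp
// Correct: round the mean to the nearest integer before truncation.
template <typename CentersType>
CentersType roundedCenter(double sum, int count)
{
    return static_cast<CentersType>(sum / count + 0.5);   // not sum / (count + 0.5)
}
```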
* Clean: replace C-style asserts with CV_Assert and CV_DbgAssert
* Try fixing warnings on Windows compilation
* Another attempt at fixing the Windows warnings
* Fixing warnings with some compilers:
Some compilers warn about statements that always exit, since they prevent the code that follows from executing.
This is why assert(0), which exits only in debug, did not trigger the warning, while CV_Assert and CV_Error,
which exit in both release and debug (albeit with different behavior), did.
In addition, other compilers complain when return 0 is removed from getKey(),
even though the preceding statement always exits.
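A hypothetical sketch of the pattern described above (not the actual FLANN getKey()): CV_Error() always throws, yet a trailing return is kept because some compilers otherwise warn that not all control paths return a value, while others flag that very return as unreachable code:

```cpp
#include <opencv2/core.hpp>

static unsigned int getKeySketch()
{
    CV_Error(cv::Error::StsUnsupportedFormat, "LSH is not implemented for that type");
    return 0;  // never reached; present only to satisfy certain compilers
}
```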
* Disable "unreachable code" warnings for Win compilers so we can use proper CV_Error
Argument "a" is of type ElementType* that is either int* or float*, while b was double*.
Mixing types prevents the possibility to use SSE or AVX instructions.
On implementation without SIMD instructions, this doesn't show any impact on performance.
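A generic illustration of the type-mixing issue above (not the actual FLANN code): a double operand forces per-iteration conversions and wider lanes, which typically defeats SSE/AVX auto-vectorization, whereas a loop over uniform types vectorizes cleanly:

```cpp
#include <cstddef>

float squaredL2Mixed(const float* a, const double* b, std::size_t n)
{
    float acc = 0.f;
    for (std::size_t i = 0; i < n; ++i)
        acc += static_cast<float>((a[i] - b[i]) * (a[i] - b[i]));  // float/double mix
    return acc;
}

float squaredL2Uniform(const float* a, const float* b, std::size_t n)
{
    float acc = 0.f;
    for (std::size_t i = 0; i < n; ++i)
        acc += (a[i] - b[i]) * (a[i] - b[i]);                      // single type, vectorizes
    return acc;
}
```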
* Clean: make the use of the indices array length consistent
Either we do not want this method to ever be used for any node other than the
root node, in which case we should replace indices_length with size_ and remove
it as an argument; or we want to keep the option of using it for other nodes,
in which case using size_ instead of indices_length would have led to a bug.
* Fix: b was not an address
* Fix: transpose the Flann repo commit "Fixes in accum_dist methods" from Adil Ibragimov
Avoids trying to compute log(ratio) with ratio = 0
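A generic sketch of the guard described above (not the actual FLANN accum_dist code): the log term is only evaluated when the ratio is strictly positive, so log(0) is never computed:

```cpp
#include <cmath>

inline float klTerm(float a, float b)
{
    if (a != 0.f && b != 0.f)
    {
        const float ratio = a / b;
        if (ratio > 0.f)
            return a * std::log(ratio);
    }
    return 0.f;  // contribution skipped when the ratio would be zero
}
```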
* Fix: transpose the Flann repo commit "result_set bugfix" from Jack Rae
* Fix Jack Rae's commit, as the initial i - 1 index was decremented before entering the loop body
* Clean: transpose the Flann repo commit "Updated comments in lsh_index" from Richard McPherson
* Fix: Transpose the Flann repo commit "Fixing unreachable code in lsh_table.h" from hypevr
* Fix warning the same way it was done in flann standalone repo
* Change the return value in case of unsupported type
Instead of using the current dimension for which we just found a large span,
we were computing the min and max for the previous dimension stored in cutfeat
(and using 0 instead of the dimension index for the very first dimension
with "span > (1 - eps) * max_span").
When running with more than one OpenCV thread, KMeans index generation was
non-deterministic because of a RWW race. The issue is resolved by removing
the offending logic from the parallel section.
All <arm_neon.h> includes in core/cv_cpu_dispatch.h are protected by an
ifndef __CUDACC__ to avoid using NEON intrinsics when compiling CUDA
kernels (.cu), which prevents hard errors such as
error: identifier "__builtin_neon_qi" is undefined
Add the same protection to flann/dist.h to fix compilation involving
flann.hpp.
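A sketch of the guard described above (the exact conditions in the real headers may differ slightly):

```cpp
// Only pull in NEON intrinsics on ARM targets, and never while nvcc is
// compiling CUDA kernels, where __CUDACC__ is defined.
#if (defined(__ARM_NEON__) || defined(__ARM_NEON)) && !defined(__CUDACC__)
#  include <arm_neon.h>
#endif
```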
* Add HPX backend for OpenCV implementation
Adds an HPX backend for cv::parallel_for_() calls, respecting the nstripes chunking parameter. C++ code for the backend is added to modules/core/parallel.cpp, together with the necessary changes to the CMake files.
The backend can operate in two modes (selectable via the CMake build option WITH_HPX_STARTSTOP): hpx (runtime always on) and hpx_startstop (start and stop the backend for each cv::parallel_for_() call).
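For reference, a usage sketch of the kind of call the backend dispatches: a cv::parallel_for_ invocation with an explicit nstripes hint, which the HPX backend, like the other backends, is expected to honour when chunking the range:

```cpp
#include <opencv2/core/utility.hpp>
#include <vector>

void squareAll(std::vector<double>& v)
{
    cv::parallel_for_(cv::Range(0, static_cast<int>(v.size())), [&](const cv::Range& r)
    {
        for (int i = r.start; i < r.end; ++i)
            v[i] *= v[i];                // each stripe processes its own sub-range
    }, 8.0 /* nstripes hint */);
}
```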
* WIP: Conditionally include hpx_main.hpp in core module tests
Header hpx_main.hpp is included in both core/perf/perf_main.cpp and core/test/test_main.cpp.
The changes to the CMake files for linking the HPX library to the above-mentioned test executables are proposed but still have issues.
* Add conditional inclusion of hpx_main.hpp to C++ CPU modules
* Remove start/stop version of hpx backend