* the first commit in the merged dnn: convert some of the public API from Blob to Mat
* temporarily or permanently removed the OpenCL optimizations, which are neither always stable nor particularly efficient; we'll likely use Halide instead
* got rid of Blob and BlobShape completely; use cv::Mat and std::vector<int> instead (see the sketch after this list)
* fixed a few compile errors
* got rid of separate .hpp files with layer declarations; instead, put everything into the respective .cpp files
* normalized all the layers' constructors; we concentrate on loading deep network layers from files instead of constructing them from scratch, so we retained only the SomeLayer::SomeLayer(const LayerParams& params) constructors
* fixed sample compilation
* suppress doxygen warnings
* trying to fix python bindings generation for DNN module
* temporarily disable python bindings while we refactor the module
* fix win32/win64 compile errors; remove trailing whitespaces
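A minimal sketch of what the Blob/BlobShape removal means for user code: an N-dimensional tensor is now an ordinary cv::Mat created from a std::vector<int> shape. The concrete shape values below are illustrative only, not taken from the module.

```cpp
#include <opencv2/core.hpp>
#include <vector>

int main()
{
    // 4-D NCHW tensor (batch=1, channels=3, 224x224), illustrative values only;
    // this replaces constructing a dnn::Blob from a BlobShape.
    std::vector<int> shape = {1, 3, 224, 224};
    cv::Mat tensor(shape, CV_32F, cv::Scalar(0));

    // Shape queries now go through Mat::dims and Mat::size
    CV_Assert(tensor.dims == 4 && tensor.size[1] == 3);
    return 0;
}
```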
Correcting bgsegm module descriptions. (#493)
* Correcting bgsegm module descriptions. The algorithm implementation doesn't include the multi-target tracking mentioned in the original paper; it only performs foreground/background segmentation.
* Removing opencv_ from heading
* Removing opencv_ from description
new corner refinement method :: using the contour-lines (#973)
* doCornerRefinement to CornerRefinementMethod :: detected contour points are used to refine the corners (usage sketch after this list)
* some little corrections
* samples edited
* documented :)
* tabs corrected
* Documentation corrections
* refinement for all candidates
* refinement for all candidates :: copy-paste error corrected
* comment
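A rough usage sketch of the switch from the boolean doCornerRefinement flag to the enum-valued cornerRefinementMethod field; the enum constant CORNER_REFINE_CONTOUR and the input file name are assumptions for illustration.

```cpp
#include <opencv2/aruco.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    cv::Mat image = cv::imread("markers.png");

    cv::Ptr<cv::aruco::Dictionary> dictionary =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
    cv::Ptr<cv::aruco::DetectorParameters> params =
        cv::aruco::DetectorParameters::create();

    // Select the contour-line based refinement instead of the old bool switch
    params->cornerRefinementMethod = cv::aruco::CORNER_REFINE_CONTOUR;

    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners;
    cv::aruco::detectMarkers(image, dictionary, corners, ids, params);
    return 0;
}
```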
aruco: make public the getBoardObjectAndImagePoints function (#1108)
* Made the private static getBoardObjectAndImagePoints function public so it can be used for calibration (usage sketch below).
* Switched the arguments detectedIds and detectedCorners, and objPoints and imgPoints, in the getBoardObjectAndImagePoints function for consistency with the calibrateCamera and calibrateCameraAruco functions.
* Added the flag CV_EXPORTS_W to the getBoardObjectAndImagePoints function.
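A hedged sketch of calling the now-public helper in the argument order described above (detectedCorners before detectedIds, objPoints before imgPoints); the wrapper function and its parameter names are illustrative, not part of the module.

```cpp
#include <opencv2/aruco.hpp>
#include <vector>

// Illustrative helper: collect object/image correspondences for one view of a
// marker board, suitable for feeding into cv::calibrateCamera-style code.
void collectBoardPoints(const cv::Mat& image,
                        const cv::Ptr<cv::aruco::Board>& board,
                        const cv::Ptr<cv::aruco::Dictionary>& dictionary,
                        cv::Mat& objPoints, cv::Mat& imgPoints)
{
    std::vector<int> detectedIds;
    std::vector<std::vector<cv::Point2f>> detectedCorners;
    cv::aruco::detectMarkers(image, dictionary, detectedCorners, detectedIds);

    // Previously a private static helper; now exported with CV_EXPORTS_W
    cv::aruco::getBoardObjectAndImagePoints(board, detectedCorners, detectedIds,
                                            objPoints, imgPoints);
}
```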
- made some dependencies explicit
- removed dependencies on highgui and some other modules where possible
- modified some samples to build without those modules
This patch adds OpenCL (OCL) kernels to accelerate the Dense Inverse Search
based optical flow algorithm. It accelerates three parts of the algorithm:
1) structure tensor element computation, 2) patch inverse search, and
3) densification.
Perf and accuracy tests are also added. The perf test shows it is 30%
faster than the current implementation. (See the sketch below.)
Signed-off-by: Li Peng <peng.li@intel.com>
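A rough usage sketch, assuming the DIS implementation lives in the contrib optflow module and that passing cv::UMat buffers through the Transparent API is what routes calc() to the new OpenCL kernels when a device is available; the frame file names are placeholders.

```cpp
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/optflow.hpp>

int main()
{
    cv::ocl::setUseOpenCL(true);   // make sure the OCL path is not disabled

    // UMat inputs let the Transparent API pick the OpenCL kernels
    cv::UMat prev, next, flow;
    cv::imread("frame0.png", cv::IMREAD_GRAYSCALE).copyTo(prev);
    cv::imread("frame1.png", cv::IMREAD_GRAYSCALE).copyTo(next);

    cv::Ptr<cv::optflow::DISOpticalFlow> dis =
        cv::optflow::createOptFlow_DIS(cv::optflow::DISOpticalFlow::PRESET_MEDIUM);

    // The accelerated stages (structure tensor, patch inverse search,
    // densification) all run inside calc()
    dis->calc(prev, next, flow);   // flow is CV_32FC2
    return 0;
}
```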