backport of commit 77b01deb80
* added depth-wise convolution; gives ~20-30% performance improvement in MobileSSD networks (a reference sketch follows this list)
* hopefully eliminated compile warnings, errors, and a failure in one test
* fixed a few typos
* decreased buffer size in some cases
* added more optimal im2row branch in the case of 1x1 convolutions
* tuned fastConv to reduce the number of passes over arrays
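For context, a depth-wise convolution applies one 2D kernel per input channel instead of mixing channels, which is what makes it cheap in MobileNet-style models. Below is a minimal, unoptimized reference sketch (assumed CHW layout and one kernel per channel; an illustration only, not the actual fastConv/depth-wise code):

```cpp
#include <vector>
#include <cstddef>

// Naive depth-wise convolution: each input channel c is convolved with its
// own kH x kW kernel and written to output channel c (groups == channels).
// Assumed layout: input is C x H x W, kernels are C x kH x kW.
void depthwiseConvRef(const std::vector<float>& input, int C, int H, int W,
                      const std::vector<float>& kernels, int kH, int kW,
                      int strideY, int strideX, int padY, int padX,
                      std::vector<float>& output)
{
    const int outH = (H + 2 * padY - kH) / strideY + 1;
    const int outW = (W + 2 * padX - kW) / strideX + 1;
    output.assign((size_t)C * outH * outW, 0.f);

    for (int c = 0; c < C; ++c)
        for (int oy = 0; oy < outH; ++oy)
            for (int ox = 0; ox < outW; ++ox)
            {
                float s = 0.f;
                for (int ky = 0; ky < kH; ++ky)
                    for (int kx = 0; kx < kW; ++kx)
                    {
                        int iy = oy * strideY - padY + ky;
                        int ix = ox * strideX - padX + kx;
                        if (iy >= 0 && iy < H && ix >= 0 && ix < W)
                            s += input[((size_t)c * H + iy) * W + ix] *
                                 kernels[((size_t)c * kH + ky) * kW + kx];
                    }
                output[((size_t)c * outH + oy) * outW + ox] = s;
            }
}
```

Compared to a regular convolution with the same kernel size, each output element needs C times fewer multiply-accumulates, which is why MobileNet-style networks benefit from a dedicated path.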
cuda4dnn: optimizations for swish, mish, sigmoid, region, resize-based ops, transpose, and identity-conv fusion
* bunch of optimizations
* more accurate implementation for mish
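For context, mish(x) = x * tanh(softplus(x)) with softplus(x) = ln(1 + e^x), and a naive softplus overflows for large x. A minimal sketch of one common numerically stable formulation (an illustration, not the actual cuda4dnn kernel):

```cpp
#include <cmath>

// mish(x) = x * tanh(softplus(x)), softplus(x) = log(1 + exp(x)).
// Computing softplus as max(x, 0) + log1p(exp(-|x|)) avoids overflow of
// exp(x) for large positive x and keeps precision for large negative x.
inline float mish_stable(float x)
{
    float sp = std::fmax(x, 0.0f) + std::log1p(std::exp(-std::fabs(x)));
    return x * std::tanh(sp);
}
```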
* Add Tengine support.
* Replace printf with CV_LOG_WARNING
* a few minor fixes in the code
* Update Tengine version
* Add header file for CV_LOG_WARNING
* Add #ifdef HAVE_TENGINE in tengine_graph_convolution.cpp (a guard sketch follows this list)
* remove trailing whitespace
* Remove trailing whitespace
* Fix a compile problem
* Fix some code style errors
* Remove whitespace
* Fix more code style problems
* Test
* Add iOS limit and fix a build problem
* Modified as alalek suggested
* Add cmake 2.8 support
* Fix a cmake 3.5.1 problem
* Test and set BUILD_ANDROID_PROJECTS OFF
* Remove some compile errors
* Remove some extra code in Tengine
* Close test.
* Test again
* Disable Android.
* Remove NDK version check
* Remove setenv() call and add license information
* Set Tengine default OFF. Close test.
Co-authored-by: Vadim Pisarevsky <vadim.pisarevsky@gmail.com>
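As a rough illustration of the HAVE_TENGINE guard mentioned in the list above, a build-time fallback could look like this (the function name, signature, and header are assumptions for illustration, not the actual tengine_graph_convolution.cpp code):

```cpp
// Hypothetical sketch: guard the Tengine-accelerated convolution path so the
// file still compiles and falls back gracefully when Tengine is not built in.
#ifdef HAVE_TENGINE
#include "tengine_c_api.h"  // assumed Tengine C API header
#endif

bool tryTengineConvolution(/* convolution parameters */)
{
#ifdef HAVE_TENGINE
    // ... build a Tengine graph for the convolution and run it ...
    return true;   // handled by Tengine
#else
    return false;  // caller uses the default convolution implementation
#endif
}
```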
Support new IE API (#15184)
* Add OpenVINO R2 support for layers (a backend-selection sketch follows this list)
* Add Core API
* Fix tests
* Fix expectNoFallbacksFromIE for ONNX nets
* Remove deprecated API
* Remove td
* Remove TargetDevice
* Fix Async
* Add test
* Fix detectMyriadX
* Fix test
* Fix warning
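For reference, selecting the Inference Engine (OpenVINO) backend from user code looks roughly like this (a minimal usage sketch; the model file names are placeholders):

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    // Load an OpenVINO IR model (placeholder file names).
    cv::dnn::Net net = cv::dnn::readNet("model.xml", "model.bin");

    // Route inference through the Inference Engine backend on the CPU target.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

    // ... net.setInput(blob); cv::Mat out = net.forward(); ...
    return 0;
}
```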
* dnn: Add a Vulkan-based backend
This commit adds a new backend "DNN_BACKEND_VKCOM" and a
new target "DNN_TARGET_VULKAN". VKCOM stands for Vulkan-based
computation library.
This backend uses the Vulkan API and SPIR-V shaders to run
the inference computation for layers. The layer types
implemented in DNN_BACKEND_VKCOM include:
Conv, Concat, ReLU, LRN, PriorBox, Softmax, MaxPooling,
AvePooling, Permute
This is just the beginning of Vulkan support in OpenCV DNN;
more layer types will be supported and performance
tuning is on the way (a usage sketch follows this message).
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
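A minimal sketch of selecting the new backend and target from user code (placeholder model files; assumes a Vulkan-enabled build):

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    // Placeholder Caffe model files.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "model.caffemodel");

    // Route inference through the Vulkan-based backend and target.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_VKCOM);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_VULKAN);

    // ... net.setInput(blob); cv::Mat out = net.forward(); ...
    return 0;
}
```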
* dnn/vulkan: Add FindVulkan.cmake to detect Vulkan SDK
To build dnn with Vulkan support, install the Vulkan SDK, set the
environment variable "VULKAN_SDK", and add "-DWITH_VULKAN=ON" to the
cmake command.
You can download the Vulkan SDK from:
https://vulkan.lunarg.com/sdk/home#linux
For installation instructions, see
https://vulkan.lunarg.com/doc/sdk/latest/linux/getting_started.html
https://vulkan.lunarg.com/doc/sdk/latest/windows/getting_started.html
https://vulkan.lunarg.com/doc/sdk/latest/mac/getting_started.html
for Linux, Windows, and Mac respectively.
To run the Vulkan backend, you also need to install a Mesa driver.
On Ubuntu, use the command 'sudo apt-get install mesa-vulkan-drivers'.
To test, use the command '$BUILD_DIR/bin/opencv_test_dnn --gtest_filter=*VkCom*'
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/Vulkan: dynamically load Vulkan runtime
There is no compile-time dependency on the Vulkan library.
If the Vulkan runtime is unavailable, it falls back to the CPU path.
Use the environment variable "OPENCL_VULKAN_RUNTIME" to specify the path
to your own Vulkan runtime library (see the loading sketch below).
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
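A rough sketch of the dynamic-loading idea on Linux, using dlopen/dlsym (the library name, error handling, and function are assumptions for illustration, not the actual loader):

```cpp
#include <dlfcn.h>
#include <cstdio>

// Try to load the Vulkan runtime at run time; if it is missing, the caller
// falls back to the CPU path instead of failing at link or start-up time.
void* loadVulkanRuntime(const char* pathFromEnv)
{
    const char* lib = pathFromEnv ? pathFromEnv : "libvulkan.so.1";  // assumed default
    void* handle = dlopen(lib, RTLD_NOW | RTLD_LOCAL);
    if (!handle)
    {
        std::fprintf(stderr, "Vulkan runtime not found (%s); using CPU path\n", lib);
        return nullptr;
    }
    // Resolve entry points lazily, starting from vkGetInstanceProcAddr.
    void* getProc = dlsym(handle, "vkGetInstanceProcAddr");
    (void)getProc;  // real code would check and store this pointer
    return handle;
}
```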
* dnn/Vulkan: Add a python script to compile GLSL shaders to SPIR-V shaders
The SPIR-V shaders are stored as text-based 32-bit hexadecimal
numbers and inserted into .cpp files as unsigned int32 arrays,
as in the example below.
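A hypothetical example of the generated output (the array name is an assumption; 0x07230203 is the SPIR-V magic number, the remaining words are placeholders):

```cpp
// Example of a GLSL shader compiled to SPIR-V and embedded as 32-bit words;
// the real arrays are produced by the Python script.
const unsigned int conv_spv[] = {
    0x07230203, /* ... remaining SPIR-V words ... */
};
const unsigned int conv_spv_len = sizeof(conv_spv) / sizeof(conv_spv[0]);
```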
* dnn/Vulkan: Put Vulkan headers into 3rdparty directory and some other fixes
Vulkan header files are copied from
https://github.com/KhronosGroup/Vulkan-Docs/tree/master/include/vulkan
to 3rdparty/include
Fix the Copyright declaration issue.
Refine OpenCVDetectVulkan.cmake
* dnn/Vulkan: Add Vulkan backend tests to the existing ones.
Also fixed some test failures:
- Don't use a bool variable as a uniform for a shader
- Fix dispatched group number exceeding the maximum
- Bypass "group > 1" convolution; this should be supported in the future.
* dnn/Vulkan: Fix multiple initialization in one thread.
* Remove isIntel check from deep learning layers
* Remove fp16->fp32 fallbacks where it's not necessary
* Fix Kernel::run to prevent localsize > globalsize
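One simple way to enforce the localsize <= globalsize invariant named above (a sketch of the general guard, not the actual cv::ocl::Kernel::run change):

```cpp
#include <algorithm>
#include <cstddef>

// Clamp each local work-group dimension so it never exceeds the corresponding
// global work size (the invariant mentioned in the commit above).
static void clampLocalSize(int dims, const size_t* globalsize, size_t* localsize)
{
    for (int i = 0; i < dims; ++i)
        localsize[i] = std::min(localsize[i], globalsize[i]);
}
```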
Support asymmetric padding in pooling layer (#12519)
* Add Inception_V1 support in ONNX
* Add asymmetric padding in OpenCL and Inference Engine (an output-size sketch follows this list)
* Refactoring
* Remove a forward method in dnn::Layer
* Add a test
* Fix tests
* Mark multiple dnn::Layer::finalize methods as deprecated
* Change dnn's inputBlobs back to a vector of pointers
* Remove Layer::forward_fallback from CV_OCL_RUN scopes
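For reference on the asymmetric-padding arithmetic, the pooled output extent per dimension uses separate begin/end pads; a minimal sketch (floor rounding; names are for illustration):

```cpp
// Output extent of one pooled dimension with asymmetric padding
// (floor mode): floor((in + pad_begin + pad_end - kernel) / stride) + 1
inline int pooledSize(int in, int kernel, int stride, int pad_begin, int pad_end)
{
    return (in + pad_begin + pad_end - kernel) / stride + 1;
}

// Example: in = 10, kernel = 3, stride = 2, pad_begin = 0, pad_end = 1
// gives (10 + 0 + 1 - 3) / 2 + 1 = 5 output elements.
```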