Fixed ReduceMean layer behaviour #25120
### Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [ ] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
Patch to opencv_extra has the same branch name.
- [ ] The feature is well documented and sample code can be built with the project CMake
Reference behaviour: `onnxruntime/core/providers/cpu/reduction/reduction_ops.cc` (L433-L443, commit a93c31e3c9)
Added int32, int64 support and type inference to dnn #24411
**Added type inference to dnn, similar to the existing shape inference, and added int32 and int64 support.**
- Added a getTypes method for layers that computes the types of a layer's outputs and internals from its input types (similar to getMemoryShapes). By default, the output and internal types equal the type of input[0]; see the sketch after this list.
- Added a type inference pipeline similar to the shape inference pipeline. The LayersShapes struct (used in the shape inference pipeline) now contains both shapes and types.
- All layer output blobs are now allocated using the types calculated by the type inference.
- Inputs and constants with int32 and int64 types are no longer automatically converted to float32.
- Added int32 and int64 support for all layers with indexing and for all layers required by the tests.
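As an illustration of the default typing rule described above, here is a minimal sketch; the signature is an approximation modeled on getMemoryShapes and may not match the real declaration in dnn exactly:

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Hypothetical layer illustrating the default rule: every output and internal
// blob gets the type of the first input (CV_32F, CV_32S, CV_64S, ...).
struct ExampleLayer
{
    void getTypes(const std::vector<int>& inputTypes,
                  int requiredOutputs,
                  int requiredInternals,
                  std::vector<int>& outputTypes,
                  std::vector<int>& internalTypes) const
    {
        CV_Assert(!inputTypes.empty());
        outputTypes.assign(requiredOutputs, inputTypes[0]);
        internalTypes.assign(requiredInternals, inputTypes[0]);
    }
};
```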
Added int32 and int64 support for CUDA:
- Added host<->device data transfers for int32 and int64
- Added int32 and int64 support for several layers (just slightly modified CUDA C++ templates)
Passed all the accuracy tests on CPU, OCL, OCL_FP16, CUDA, and CUDA_FP16 (except the RAFT model).
**CURRENT PROBLEMS**:
- The ONNX parser always converts int64 constants and layer attributes to int32, so some models with int64 constants don't work (e.g. RAFT). The solution is to disable the int64->int32 conversion and fix attribute reading in many of the ONNX layer parsers (https://github.com/opencv/opencv/issues/25102); a hedged sketch of reading an int64 initializer without the downcast follows this list.
- Type inference and int support were not added to the Vulkan backend, so it does not work at all right now.
- Some layers don't support int types yet, so some models may not work.
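For illustration only, a minimal sketch of reading an int64 ONNX initializer into an int64 (CV_64S) Mat instead of downcasting to int32; the helper name, the include path, and the exact protobuf accessors used here are assumptions, not the actual parser code:

```cpp
#include <opencv2/core.hpp>
#include <onnx/onnx_pb.h>   // include path depends on how the ONNX protos are generated
#include <cstdint>
#include <cstring>
#include <vector>

cv::Mat readInt64Tensor(const onnx::TensorProto& tp)
{
    CV_Assert(tp.data_type() == onnx::TensorProto_DataType_INT64);

    std::vector<int> sizes(tp.dims().begin(), tp.dims().end());
    if (sizes.empty())
        sizes.push_back(1);                       // scalar initializer

    cv::Mat blob(sizes, CV_64S);                  // CV_64S: int64 depth from this PR series
    if (tp.has_raw_data())
    {
        CV_Assert(tp.raw_data().size() == blob.total() * sizeof(int64_t));
        std::memcpy(blob.data, tp.raw_data().data(), tp.raw_data().size());
    }
    else
    {
        CV_Assert((size_t)tp.int64_data_size() == blob.total());
        for (int i = 0; i < tp.int64_data_size(); ++i)
            blob.ptr<int64_t>()[i] = tp.int64_data(i);
    }
    return blob;
}
```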
**CURRENT WORKAROUNDS**:
- CPU arg_layer indices are computed in int32 followed by an int32->int64 conversion (the master branch has the same workaround with an int32->float conversion)
- CPU and OCL pooling_layer indices are computed in float followed by a float->int64 conversion
- CPU gather_layer indices are computed in int32, so int64 indices are converted to int32 (the master branch has the same workaround with a float->int32 conversion); the conversion pattern is sketched after this list
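A minimal sketch of the conversion pattern these workarounds share (indices produced in a narrower type, then widened to int64 for the output blob); the function name is illustrative, not the actual layer code:

```cpp
#include <opencv2/core.hpp>

// Widen int32 indices to int64; CV_64S is the int64 depth added in this PR series.
cv::Mat widenIndices(const cv::Mat& indicesInt32)
{
    CV_Assert(indicesInt32.depth() == CV_32S);
    cv::Mat indicesInt64;
    indicesInt32.convertTo(indicesInt64, CV_64S);
    return indicesInt64;
}
```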
**DISABLED TESTS**:
- RAFT model
**REMOVED TESTS**:
- Greater_input_dtype_int64 (it doesn't follow the ONNX rules; the whole test just compares a float tensor with an int constant)
**TODO IN NEXT PULL REQUESTS**:
- Add int64 support for ONNX parser
- Add int support for more layers
- Add int support for OCL (currently int layers just run on CPU)
- Add int tests
- Add int support for other backends
Primitive 1D Tests #24977
This PR adds tests for 1D inputs for layers, which is required after introducing 1D support in 5.x. Currently, tests are written for the following layers:
- [x] `Add`, `Sub`
- [x] `Product`, `Div`
- [x] `Min`, `Max`
- [x] `Argmin`, `Argmax`
- [x] `Gather`
This list is to be extended to more layers such as `gemm`, `conv`, etc. A minimal sketch of this kind of 1D test is shown below.
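Roughly, a 1D test boils down to building a tiny net and checking that a 1D input stays 1D; the sketch below is an illustration under that assumption (the `NaryEltwise` layer type and its `operation` parameter are used as an example of element-wise Add, and this relies on true 1D Mat support in 5.x; it is not the actual test code):

```cpp
#include <opencv2/dnn.hpp>
#include <iostream>

int main()
{
    using namespace cv;
    using namespace cv::dnn;

    Net net;
    LayerParams lp;
    lp.type = "NaryEltwise";          // element-wise layer used here as an example
    lp.name = "add0";
    lp.set("operation", "add");
    int id = net.addLayer(lp.name, lp.type, lp);
    net.setInputsNames({"a", "b"});
    net.connect(0, 0, id, 0);         // layer 0 is the input layer
    net.connect(0, 1, id, 1);

    int sz[] = {4};                   // a truly 1D shape: [4]
    Mat a(1, sz, CV_32F, Scalar(1));
    Mat b(1, sz, CV_32F, Scalar(2));
    net.setInput(a, "a");
    net.setInput(b, "b");

    Mat out = net.forward();
    std::cout << "dims=" << out.dims << " size=" << out.size[0] << std::endl; // expected to stay 1D
    return 0;
}
```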
### Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
Supporting protobuf v22 and later (with abseil-cpp/C++17) #24372
Fixes https://github.com/opencv/opencv/issues/24369
Related: https://github.com/opencv/opencv/issues/23791
1. This patch supports external protobuf v22 and later, which requires abseil-cpp and C++17.
Even if the built-in protobuf is upgraded to v22 or later,
the dependency on abseil-cpp and the requirement for C++17 will remain.
2. Some tests for Caffe require a patched protobuf, so this patch disables them.
This patch was tested with the following libraries:
- Protobuf: /usr/local/lib/libprotobuf.so (4.24.4)
- abseil-cpp: YES (20230125)
### Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
dnn: cleanup of halide backend for 5.x #24231
Merge with https://github.com/opencv/opencv_extra/pull/1092.
### Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
TFLite models on different backends (tests and improvements) #24039
### Pull Request Readiness Checklist
* MaxUnpooling with OpenVINO
* Fully connected with transposed inputs/weights with OpenVINO
* Enable backend tests for TFLite (related to https://github.com/opencv/opencv/issues/23992#issuecomment-1640691722); a backend-selection sketch follows this list
* Increase existing test thresholds
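As a rough usage sketch of what these tests exercise, a TFLite model can be forced onto the OpenVINO (Inference Engine) backend like this; the model path, input shape, and layout are placeholders (TFLite models are typically NHWC, but the layout expected by the importer may differ):

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromTFLite("model.tflite");
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

    cv::Mat input({1, 224, 224, 3}, CV_32F, cv::Scalar(0));  // placeholder NHWC input
    net.setInput(input);
    cv::Mat out = net.forward();
    (void)out;
    return 0;
}
```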
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
Switch to new OpenVINO API after 2022.1 release
* Pass Layer_Test_Convolution_DLDT.Accuracy/0 test
* Pass test Test_Caffe_layers.Softmax
* Failed 136 tests
* Fix Concat. Failed 120 tests
* Custom nGraph ops. 19 failed tests
* Set and get properties from Core
* Read model from buffer
* Change MaxPooling layer output names. Restore reshape
* Cosmetic changes
* Cosmetic changes
* Override getOutputsInfo
* Fixes for OpenVINO < 2022.1
* Async inference for 2021.4 and less
* Compile model with config
* Fix serialize for 2022.1
* Asynchronous inference with 2022.1
* Handle 1d outputs
* Work with model with dynamic output shape
* Fixes with 1d output for old API
* Control outputs by nGraph function for all OpenVINO versions
* Refer to inputs in PrePostProcessor by indices
* Fix cyclic dependency between InfEngineNgraphNode and InfEngineNgraphNet.
Add InferRequest callback only for async inference. Do not capture InferRequest object.
* Fix tests thresholds
* Fix HETERO:GPU,CPU plugin issues with unsupported layer
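For reference, a minimal sketch (outside of OpenCV) of the OpenVINO API 2.0 calls this migration moves to, replacing the old InferenceEngine:: classes; the model path and shapes are placeholders:

```cpp
#include <openvino/openvino.hpp>
#include <memory>

int main()
{
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");   // was CNNNetwork
    ov::CompiledModel compiled = core.compile_model(model, "CPU");     // was LoadNetwork
    ov::InferRequest req = compiled.create_infer_request();

    ov::Tensor input(ov::element::f32, ov::Shape{1, 3, 224, 224});
    req.set_input_tensor(input);
    req.infer();                                   // start_async()/wait() for the async path
    ov::Tensor output = req.get_output_tensor();
    (void)output;
    return 0;
}
```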
Support non-zero hidden state for LSTM
* fully support non-zero hidden state for LSTM
* check dims of hidden state for LSTM
* fix failed test Test_Model.TextRecognition
* add new tests for LSTM w/ non-zero hidden params
Co-authored-by: Julie Bareeva <julia.bareeva@xperience.ai>
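A hypothetical sketch of what the new LSTM tests cover, assuming the exported ONNX graph exposes `initial_h`/`initial_c` as graph inputs; in the actual tests the non-zero hidden state may instead be baked into the model as initializers, and the file and input names below are placeholders:

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromONNX("lstm_hidden_state.onnx");  // placeholder model

    const int numDirections = 1, batch = 1, seqLen = 5, inputSize = 10, hiddenSize = 8;

    cv::Mat x({seqLen, batch, inputSize}, CV_32F, cv::Scalar(0.1));
    cv::Mat h0({numDirections, batch, hiddenSize}, CV_32F, cv::Scalar(0.5)); // non-zero hidden state
    cv::Mat c0({numDirections, batch, hiddenSize}, CV_32F, cv::Scalar(0.5)); // non-zero cell state

    net.setInput(x,  "x");
    net.setInput(h0, "initial_h");
    net.setInput(c0, "initial_c");
    cv::Mat y = net.forward();
    (void)y;
    return 0;
}
```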
revise default proto to match the filename in the documentation
fix a bug
beautify Python code
fix bug
beautify code
add test samples with larger/smaller size
remove useless code
use bytearray without creating a tmp file
remove useless code
support eltwise sum with different number of input channels in CUDA backend
* add shortcut primitive
* add offsets in shortcut kernel
* skip tests involving more than two inputs
* remove redundant modulus operation
* support multiple inputs
* remove whole file indentation
* skip acc in0 trunc test if weighted
* use shortcut iff channels are unequal
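A plain-CPU sketch of the general shortcut idea the CUDA kernel implements: an element-wise sum of two NCHW tensors whose channel counts differ, adding the smaller tensor onto the leading channels of the larger one. The real backend also sorts inputs and applies offsets, so the exact semantics may differ in detail:

```cpp
#include <opencv2/core.hpp>

// Add the smaller tensor onto the first channels of the larger one (float NCHW).
void shortcutAdd(const cv::Mat& a, const cv::Mat& b, cv::Mat& out)
{
    CV_Assert(a.dims == 4 && b.dims == 4 && a.depth() == CV_32F && b.depth() == CV_32F);
    CV_Assert(a.size[0] == b.size[0] && a.size[2] == b.size[2] && a.size[3] == b.size[3]);

    const cv::Mat& big   = (a.size[1] >= b.size[1]) ? a : b;
    const cv::Mat& small = (a.size[1] >= b.size[1]) ? b : a;

    out = big.clone();
    const int n = big.size[0], cSmall = small.size[1], hw = big.size[2] * big.size[3];
    for (int i = 0; i < n; ++i)
        for (int c = 0; c < cSmall; ++c)
        {
            float* dst = out.ptr<float>(i, c);          // start of plane (i, c)
            const float* src = small.ptr<float>(i, c);
            for (int k = 0; k < hw; ++k)
                dst[k] += src[k];
        }
}
```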
Add lightweight IE hardware targets checks
nGraph: Concat with paddings
Enable more nGraph tests
Restore FP32->FP16 for GPU plugin of IE
try to fix buildbot
Use lightweight IE targets check only starting from R4
* enable tests for DNN_TARGET_CUDA_FP16
* disable deconvolution tests
* disable shortcut tests
* fix typos and some minor changes
* dnn(test): skip CUDA FP16 test too (run_pool_max)
dnn(eltwise): fix handling of different number of channels
* dnn(test): reproducer for Eltwise layer issue from PR16063
* dnn(eltwise): rework support for inputs with different channels
* dnn(eltwise): get rid of finalize(), variableChannels
* dnn(eltwise): update input sorting by number of channels
- do not swap inputs if the number of channels is the same after truncation
* dnn(test): skip "shortcut" with batch size 2 on MYRIAD targets