dnn: fix unaligned memory access crash on armv7
The getTensorContent function would return a Mat pointing directly
into a member of a Protobuf-encoded message. Protobuf makes no
alignment guarantees, which results in a crash on armv7 when loading
models while bit 2 is set in /proc/cpu/alignment (or while the
kernel's alignment-compatibility feature is disabled): any read from
the unaligned data member raises SIGBUS. As a workaround, this commit
makes an aligned copy via the existing clone functionality in
getTensorContent; the unsafe copy=false option is removed.
Unfortunately, a rather crude hack in PReLUSubgraph in fact writes(!)
to the Protobuf message. We limit ourselves to fixing the alignment
issues in this commit and add getTensorContentRefUnaligned to cover
the write case with a safe memcpy; a FIXME marks the issue. A minimal
sketch of the workaround follows the notes below.
* dnn: reduce the number of .clone() calls
* dnn: update FIXME comment
Co-authored-by: Alexander Alekhin <alexander.a.alekhin@gmail.com>
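A minimal sketch of the clone-based workaround, with simplified names
(not the actual importer code): wrapping the protobuf-owned buffer in
a Mat copies nothing, so the data pointer may be unaligned; clone()
allocates fresh, Mat-owned storage and copies the contents into it.

```cpp
#include <opencv2/core.hpp>

cv::Mat getAlignedTensorCopy(const char* protobufData, int rows, int cols)
{
    // No-copy wrapper around the (possibly unaligned) protobuf bytes.
    cv::Mat unaligned(rows, cols, CV_32F, const_cast<char*>(protobufData));
    // clone() performs the aligned copy, so later element loads
    // cannot fault on armv7.
    return unaligned.clone();
}
```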
Make the RVV-intrinsics-based optimizations in DNN adjustable to different vector lengths (VLEN).
* Update fastGEMM for multi VLEN.
* Update fastGEMM1T for multi VLEN.
* Update fastDepthwiseConv for multi VLEN.
* Update fastConv for multi VLEN.
* Replace malloc with cv::AutoBuffer.
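A minimal sketch of the malloc-to-cv::AutoBuffer pattern (not the
actual convolution code): AutoBuffer keeps small scratch arrays in a
fixed internal buffer, only heap-allocates larger ones, and releases
the storage automatically (RAII), so early-return paths cannot leak.

```cpp
#include <opencv2/core.hpp>

void scratchExample(int n)
{
    cv::AutoBuffer<float> buf(n);  // replaces: (float*)malloc(n * sizeof(float))
    float* p = buf.data();         // pointer to n writable floats
    for (int i = 0; i < n; i++)
        p[i] = 0.f;
    // no free() needed: buf's destructor releases the storage
}
```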
dnn: int8 quantized layers support in ONNX importer
* added quantized layers support in the ONNX importer
* added more cases to the eltwise node and some more checks
* added tests for quantized nodes
* relax thresholds for failed tests, address review comments
* refactoring based on review comments
* added support for previously unsupported cases and a pre-quantized ResNet50 test
* relax thresholds due to int8 resize layer
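For reference, the affine int8 scheme that ONNX QuantizeLinear /
DequantizeLinear define and that the importer maps onto the dnn int8
layers; a simplified sketch (ONNX specifies round-half-to-even, while
std::lround rounds half away from zero):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

int8_t quantize(float x, float scale, int zeroPoint)
{
    int q = (int)std::lround(x / scale) + zeroPoint;
    return (int8_t)std::min(127, std::max(-128, q));  // saturate to int8
}

float dequantize(int8_t q, float scale, int zeroPoint)
{
    return scale * (float)(q - zeroPoint);
}
```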
Add ExpandDims layer to tf_importer.cpp
* Add ExpandDims to tf_importer.
* add -1 expand test case.
* Support inputs with different numbers of dimensions.
* Compatible with 5-dimensional NDHWC data
* Align code formatting.
* Support 3-dim input.
* Fix 3-dim input bug.
* Fix code formatting errors.
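A sketch of the shape arithmetic ExpandDims performs (the helper name
is illustrative, not the importer's actual function): insert one
dimension of size 1 at `axis`, where a negative axis counts from the
end, so axis == -1 appends the new dimension, as in the "-1 expand"
test case above.

```cpp
#include <cassert>
#include <vector>

std::vector<int> expandDimsShape(std::vector<int> shape, int axis)
{
    int ndims = (int)shape.size();
    if (axis < 0)
        axis += ndims + 1;  // axis = -1 on a 3-dim input -> 3 (append)
    assert(0 <= axis && axis <= ndims);
    shape.insert(shape.begin() + axis, 1);
    return shape;
}
// expandDimsShape({2, 3, 4}, -1) -> {2, 3, 4, 1}
// expandDimsShape({2, 3, 4},  0) -> {1, 2, 3, 4}
```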
Add support for YOLOv4x-mish
* backport YOLOv4x-mish support to 3.4
* add YOLOv4x-mish test
* address review comments
Co-authored-by: Guo Xu <guoxu@1school.com.cn>
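The "mish" in YOLOv4x-mish refers to the Mish activation,
mish(x) = x * tanh(softplus(x)). A scalar reference implementation
(the dnn layer itself is vectorized, and a production version should
also guard exp() against overflow for large x):

```cpp
#include <cmath>

float mish(float x)
{
    float softplus = std::log1p(std::exp(x));  // ln(1 + e^x)
    return x * std::tanh(softplus);
}
```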
Add Normalize subgraph, fix Slice, Mul and Expand
* Add the Normalize subgraph; support starts<0 and axis<0 in Slice and Mul broadcasting in the middle; fix Expand's unsqueeze (the index normalization is sketched after this list)
* remove todos
* remove range-based for loop
* address review comments
* change >> to > > in template
* fix indexation
* fix Expand ops that do nothing
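Illustrative helpers (not the importer's actual functions) for the
index normalization the Slice fix calls for: ONNX lets axes and
starts/ends be negative, counting back from the end of the shape or
dimension.

```cpp
#include <algorithm>

int normalizeAxis(int axis, int ndims)
{
    return axis < 0 ? axis + ndims : axis;  // e.g. axis = -1 -> ndims - 1
}

int normalizeStart(int start, int dimSize)
{
    if (start < 0)
        start += dimSize;  // e.g. start = -2 on a dim of size 5 -> 3
    return std::min(std::max(start, 0), dimSize);  // clamp into range
}
```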
Support PPSeg model for dnn module
* fixed README for CI
* add test case
* fixed bug
* address review comments
* rm dnn_model_runner
* update test case
* fixed bug in test case
* update test case
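A hedged usage sketch for a PPSeg model: the file name and the input
size are hypothetical placeholders, while readNet, blobFromImage,
setInput, and forward are the standard dnn calls. The output is a
per-pixel class-score map that an application would argmax over
channels.

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNet("ppseg.onnx");  // hypothetical export
    cv::Mat img = cv::imread("input.jpg");
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 255.0, cv::Size(512, 512),
                                          cv::Scalar(), /*swapRB=*/true);
    net.setInput(blob);
    cv::Mat scores = net.forward();  // N x C x H x W class scores
    return 0;
}
```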
Optimization of DNN using native RISC-V vector intrinsics.
* Use RVV to optimize fastGEMM (FP32) in DNN.
* Use RVV to optimize fastGEMM1T in DNN.
* Use RVV to optimize fastConv in DNN.
* Use RVV to optimize fastDepthwiseConv in DNN.
* Vectorize tails using vl.
* Use "vl" instead of scalar to handle small block in fastConv.
* Fix out-of-bounds memory access in "fastGEMM1T".
* Remove setvl.
* Remove useless initialization.
* Use loop unrolling instead of a switch to handle the tail part.
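A sketch of the "vectorize tails using vl" idea on a simplified axpy
loop (not the actual fastConv/fastGEMM code): vsetvl returns how many
elements the hardware will process this iteration, so one loop serves
any VLEN and the final short tail needs no scalar fallback. Intrinsic
names use the pre-1.0 RVV spelling; newer toolchains prefix them with
__riscv_.

```cpp
#include <riscv_vector.h>

void axpy(int n, float a, const float* x, float* y)
{
    for (int i = 0; i < n; )
    {
        size_t vl = vsetvl_e32m4(n - i);            // vl shrinks on the tail
        vfloat32m4_t vx = vle32_v_f32m4(x + i, vl);
        vfloat32m4_t vy = vle32_v_f32m4(y + i, vl);
        vy = vfmacc_vf_f32m4(vy, a, vx, vl);        // vy += a * vx
        vse32_v_f32m4(y + i, vy, vl);
        i += (int)vl;
    }
}
```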
Add Python test for LSTM layer
* Add Python test for LSTM layer
* Set different test threshold for FP16 target
* rename test to test_input_3d
Co-authored-by: Julie Bareeva <julia.bareeva@xperience.ai>
Support non-zero hidden state for LSTM
* fully support non-zero hidden state for LSTM
* check dims of hidden state for LSTM
* fix failed test Test_Model.TextRecognition
* add new tests for LSTM with non-zero hidden params
Co-authored-by: Julie Bareeva <julia.bareeva@xperience.ai>
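For reference, the standard LSTM recurrence (matching the ONNX
operator's semantics); this change lets the importer seed h_0 and c_0
from the model's initial_h / initial_c inputs instead of assuming
zeros, which is also why the hidden-state dimensions must be checked
against the weights:

```latex
% i, f, o are the input/forget/output gates; \tilde{c}_t the candidate
% cell state; \odot elementwise product; \sigma the logistic sigmoid.
\begin{aligned}
i_t &= \sigma(W_i x_t + R_i h_{t-1} + b_i) &
f_t &= \sigma(W_f x_t + R_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + R_o h_{t-1} + b_o) &
\tilde{c}_t &= \tanh(W_c x_t + R_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t &
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

At t = 1 the recurrence consumes h_0 and c_0 directly, so non-zero
initial values flow into every subsequent step.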