* videoio: add support for obsensor (Orbbec RGB-D Camera)
* obsensor: code format issues fixed and some code optimized
* obsensor: fix typo and format issues
* obsensor: fix 'crosses initialization' compiler error
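A minimal usage sketch of the new backend (assuming the CAP_OBSENSOR backend id and the CAP_OBSENSOR_* retrieval flags added with this support):

```cpp
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    // Open the first Orbbec device through the obsensor backend.
    cv::VideoCapture cap(0, cv::CAP_OBSENSOR);
    if (!cap.isOpened())
        return 1;

    cv::Mat depth, bgr;
    while (cap.grab())
    {
        // Retrieve the depth map and the color frame captured by the same grab.
        cap.retrieve(depth, cv::CAP_OBSENSOR_DEPTH_MAP);
        cap.retrieve(bgr, cv::CAP_OBSENSOR_BGR_IMAGE);
        cv::imshow("bgr", bgr);
        if (cv::waitKey(1) == 27)
            break;
    }
    return 0;
}
```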
[GSoC] New universal intrinsic backend for RVV
* Add new RVV backend (partially implemented).
* Modify the framework of Universal Intrinsic.
* Add CV_SIMD macro guards to current UI code.
* Use vlanes() instead of nlanes.
* Modify the UI test.
* Enable the new RVV (scalable) backend.
* Remove whitespace.
* Rename and make other minor modifications.
* Update intrin.hpp, but it still does not work on AVX/SSE.
* Update conditional compilation macros.
* Use static variable for vlanes.
* Use max_nlanes for array definitions (see the sketch below).
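A sketch of what the vlanes()/max_nlanes change means for universal-intrinsic code (an illustrative loop, assuming the VTraits API that comes with the scalable backend):

```cpp
#include <opencv2/core/hal/intrin.hpp>

// Sum an array with the universal intrinsics. The vector width is queried
// at run time with vlanes(); max_nlanes is a compile-time upper bound that
// stays safe for sizing stack arrays even on scalable (RVV) targets.
float v_sum(const float* src, int n)
{
    using namespace cv;
    float s = 0.f;
    int i = 0;
#if (CV_SIMD || CV_SIMD_SCALABLE)
    const int step = VTraits<v_float32>::vlanes();      // run-time lane count
    v_float32 vsum = vx_setzero_f32();
    for (; i + step <= n; i += step)
        vsum = v_add(vsum, vx_load(src + i));

    float buf[VTraits<v_float32>::max_nlanes];          // compile-time bound
    v_store(buf, vsum);
    for (int j = 0; j < step; ++j)
        s += buf[j];
#endif
    for (; i < n; ++i)                                   // scalar tail
        s += src[i];
    return s;
}
```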
Reimplementation of Element-wise layers with broadcasting support
* init
* semi-working initial version
* add small_vector
* wip
* remove smallvec
* add nary function
* replace auto with Mat in lambda expr used in transform
* uncomment asserts
* autobuffer shape_buf & step_buf
* fix a missing bracket
* fixed a missing addLayer in parseElementWise
* solve one-dimensional broadcast
* remove pre_broadcast_transform for the case of two constants; fix missing constBlobsExtraInfo when addConstant is called
* one autobuffer for step & shape
* temporary fix for the missing original dimension information
* fix parseUnsqueeze when it gets a 1d tensor constant
* support sum/mean/min/max with only one input
* reuse old code to handle cases of two non-constant inputs
* add condition to handle div & mul of two non-constant inputs
* use || instead of or
* remove trailing spaces
* enlarge buf in binary_forward to contain the other buffer
* use autobuffer in nary_forward
* generate data randomly and add more cases for perf
* add op and, or & xor
* update perf_dnn
* remove some comments
* remove legacy; add two ONNX conformance tests in filter
* move from cpu_denylist to all_denylist
* adjust parsing for inputs>=2
Co-authored-by: fengyuentau <yuantao.feng@opencv.org.cn>
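For reference, a small sketch of the NumPy-style broadcasting rule these layers implement (an illustrative helper, not the actual binary_forward/nary_forward code): shapes are right-aligned and a dimension of size 1 stretches to match the other input.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative shape broadcasting: right-align the two shapes; a size-1
// dimension is stretched, anything else must match exactly.
static bool broadcastShapes(const std::vector<int>& a,
                            const std::vector<int>& b,
                            std::vector<int>& out)
{
    const size_t nd = std::max(a.size(), b.size());
    out.assign(nd, 1);
    for (size_t i = 0; i < nd; ++i)
    {
        int da = i < nd - a.size() ? 1 : a[i - (nd - a.size())];
        int db = i < nd - b.size() ? 1 : b[i - (nd - b.size())];
        if (da != db && da != 1 && db != 1)
            return false;               // incompatible shapes
        out[i] = std::max(da, db);
    }
    return true;
}
```

One common way to realize this in the forward pass (and a reason the commits above track per-input shape and step buffers) is to give size-1 dimensions a step of 0, so the same element is re-read while iterating over the broadcast output shape.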
Add per_tensor_quantize to int8 quantize
* add per_tensor_quantize to dnn int8 module.
* change the API flag from perTensor to perChannel, and recognize the quantization type in the ONNX importer.
* move the default value to the .hpp header
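A usage sketch of the resulting option (assuming the perChannel flag exposed on Net::quantize; passing false selects the per-tensor scheme added here):

```cpp
#include <opencv2/dnn.hpp>
#include <vector>

// Quantize a float model to int8 using per-tensor scales instead of the
// default per-channel scheme.
cv::dnn::Net quantizePerTensor(cv::dnn::Net& net,
                               const std::vector<cv::Mat>& calibData)
{
    const bool perChannel = false;   // per-tensor quantization
    return net.quantize(calibData, CV_8S, CV_8S, perChannel);
}
```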
DNN: Accelerating convolution
* Fast convolution for ARM, X86 and universal intrinsics.
* improve code style.
* error fixed.
* improve the License
* optimize memory allocation and adjust the threshold.
* change the FasterRCNN_vgg16 test to 2 GB memory.
* enable using -DWITH_WAYLAND=ON
* adapted from https://github.com/pfpacket/opencv-wayland
* uses the xdg_shell stable protocol
* overrides HAVE_QT if HAVE_WAYLAND and WITH_WAYLAND are set
Signed-off-by: Joel Winarske <joel.winarske@gmail.com>
Co-authored-by: Ryo Munakata <afpacket@gmail.com>
Replaced sprintf with safer snprintf
* Straightforward replacement of sprintf with safer snprintf
* Trickier replacement of sprintf with safer snprintf
Some functions were changed to take another parameter: the size of the buffer, so that they can pass that size on to snprintf.
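A minimal sketch of the pattern (the function and buffer names are illustrative): the caller now passes the buffer size, and snprintf truncates instead of overflowing.

```cpp
#include <cstddef>
#include <cstdio>

// Before: void formatLabel(char* buf)  { sprintf(buf, "frame %d", idx); }
// After:  the caller also passes the capacity, which is forwarded to snprintf,
//         so the output is always NUL-terminated and never exceeds the buffer.
static void formatLabel(char* buf, size_t bufSize, int idx)
{
    snprintf(buf, bufSize, "frame %d", idx);
}
```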
Fix issue 22015, let Clip layer support 1-3 inputs
* Fix issue 22015.
Let the Clip layer support 1-3 inputs.
* Resolve other problems caused by modifications
* Update onnx_importer.cpp
added extra checks to min/max handling in Clip
* Add assertions to check the size of the input
* Add test for clip with min and max initializers
* Separate test for "clip_init_min_max". Change the check method for input_size to provide a clearer message in case of problem.
* Add tests for clip with min or max initializers
* Change the implementation of getting input
Co-authored-by: Vadim Pisarevsky <vadim.pisarevsky@gmail.com>
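For context, ONNX Clip (opset 11+) takes the data tensor plus optional min and max inputs, which is why 1-3 inputs are valid; a sketch of the semantics (illustrative, not the importer code):

```cpp
#include <opencv2/core.hpp>
#include <limits>

// Clip with optional bounds: a missing min/max input falls back to the
// lowest/highest representable value, so 1, 2 or 3 inputs all work.
static cv::Mat clip(const cv::Mat& x,
                    const cv::Mat* minT = nullptr,
                    const cv::Mat* maxT = nullptr)
{
    const float lo = minT ? minT->at<float>(0) : -std::numeric_limits<float>::max();
    const float hi = maxT ? maxT->at<float>(0) :  std::numeric_limits<float>::max();
    cv::Mat y = cv::max(x, lo);   // lower clamp
    y = cv::min(y, hi);           // upper clamp
    return y;
}
```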
Fix sampling for version multiplying factor
* reduce experimentalFrequencyElem and listFrequencyElem
* fix large resize
* fix tile in postIntermediate
* add getMinSideLen(), add corrected_index
* add test decode_regression_21929 (author: Kumataro) and test decode_regression_version_25
* objdetect: qrcode_encoder: fix missing timing pattern
* objdetect: qrcode_encoder: Add SCOPED_TRACE() and replace CV_Assert() with ASSERT_EQ().
- Add SCOPED_TRACE() for version loop.
- Replace CV_Assert() with ASSERT_EQ().
- Rename expect_msg to msg.
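A minimal GoogleTest sketch of that pattern (encodeThenDecode is a hypothetical helper): SCOPED_TRACE labels each iteration of the version loop, so a failing ASSERT_EQ reports which QR version broke.

```cpp
#include <gtest/gtest.h>
#include <opencv2/core.hpp>
#include <string>

// Hypothetical helper assumed for this sketch: encodes `text` at the given
// QR version, decodes the rendered code, and returns the decoded string.
std::string encodeThenDecode(const std::string& text, int version);

TEST(Objdetect_QRCode, encode_decode_versions_sketch)
{
    for (int version = 1; version <= 40; ++version)
    {
        SCOPED_TRACE(cv::format("version: %d", version));  // printed with any failure in this scope
        const std::string expected = "OpenCV";
        ASSERT_EQ(expected, encodeThenDecode(expected, version));
    }
}
```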
Some GStreamer elements may produce buffers with very non-standard
strides and offsets, or even transport each plane through separate,
non-contiguous memory. This non-standard layout is communicated via
GstVideoMeta structures attached to the buffers. Given this, when a
GstVideoMeta is available, one should parse the layout from it instead
of generating a generic one from the caps.
The GstVideoFrame utility does precisely this: if the buffer
contains a video meta, it uses that to fill the format and
memory layout. If there is no meta available, the layout is
inferred from the caps.
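A sketch of that mapping with the standard GStreamer API (error handling trimmed): gst_video_frame_map() honours an attached GstVideoMeta and exposes per-plane pointers and strides, so the caller no longer derives the layout from the caps alone.

```cpp
#include <gst/video/video.h>

// Map a buffer as a video frame: plane pointers and strides come from the
// attached GstVideoMeta when present, otherwise from the caps-derived info.
static bool readFirstPlane(GstBuffer* buffer, const GstVideoInfo* info)
{
    GstVideoFrame frame;
    if (!gst_video_frame_map(&frame, info, buffer, GST_MAP_READ))
        return false;

    guint8* data   = (guint8*)GST_VIDEO_FRAME_PLANE_DATA(&frame, 0);
    gint    stride = GST_VIDEO_FRAME_PLANE_STRIDE(&frame, 0);
    (void)data; (void)stride;   // ... wrap or copy the plane here ...

    gst_video_frame_unmap(&frame);
    return true;
}
```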
* Added support for 4-byte RGB V4L2 pixel formats
Added support for V4L2_PIX_FMT_XBGR32 and V4L2_PIX_FMT_ABGR32 pixel
formats.
* Added workaround for missing V4L2_PIX_FMT_ABGR32 and V4L2_PIX_FMT_XBGR32
defines
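The workaround is the usual fallback-define pattern for older kernel headers; a sketch (fourcc values as in current linux/videodev2.h):

```cpp
#include <linux/videodev2.h>

// Older videodev2.h headers may lack these 32-bit formats; define them from
// their fourcc codes so the capture code still builds.
#ifndef V4L2_PIX_FMT_XBGR32
#define V4L2_PIX_FMT_XBGR32 v4l2_fourcc('X', 'R', '2', '4')  /* 32  BGRX-8-8-8-8 */
#endif
#ifndef V4L2_PIX_FMT_ABGR32
#define V4L2_PIX_FMT_ABGR32 v4l2_fourcc('A', 'R', '2', '4')  /* 32  BGRA-8-8-8-8 */
#endif
```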