Merge pull request #23853 from fengyuentau:disable_fp16_warning

dnn: disable warning when loading a fp16 model #23853

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [ ] There is a reference to the original bug report and related work
- [ ] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
      Patch to opencv_extra has the same branch name.
- [ ] The feature is well documented and sample code can be built with the project CMake
Yuantao Feng, committed by GitHub
parent 21d79abb1f
commit aff420329c
      modules/dnn/src/onnx/onnx_graph_simplifier.cpp

@@ -1125,7 +1125,7 @@ Mat getMatFromTensor(const opencv_onnx::TensorProto& tensor_proto)
     else if (datatype == opencv_onnx::TensorProto_DataType_FLOAT16)
     {
         // FIXME, for now, we only load FP16 Tensor as FP32 Mat, full support for FP16 is required in the future.
-        CV_LOG_ONCE_WARNING(NULL, "DNN: load FP16 model as FP32 model, and it takes twice the FP16 RAM requirement.");
+        CV_LOG_ONCE_INFO(NULL, "DNN: load FP16 model as FP32 model, and it takes twice the FP16 RAM requirement.");
         // ONNX saves float 16 data in two format: int32 and raw_data.
         // Link: https://github.com/onnx/onnx/issues/4460#issuecomment-1224373746
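The log message being downgraded here describes a real cost: the loader widens FP16 weights to FP32, so the in-memory footprint doubles relative to the file's FP16 data. A minimal NumPy sketch (illustrative only, not OpenCV's actual loading code) shows both the bit-pattern storage the code comment refers to and the size doubling:

```python
import numpy as np

# ONNX serializes FP16 tensors either as raw bytes (raw_data) or packed into
# int32 fields; in both cases the 16-bit patterns are reinterpreted as float16.
bits = np.array([0x3C00, 0x4000], dtype=np.uint16)  # bit patterns of 1.0 and 2.0
fp16 = bits.view(np.float16)
print(fp16)  # [1. 2.]

# Widening to FP32, as the DNN loader does, doubles the memory footprint.
fp32 = fp16.astype(np.float32)
print(fp16.nbytes, fp32.nbytes)  # 4 8
```

Since this widening is expected, intentional behavior rather than a problem the user can act on, reporting it once at INFO level instead of WARNING level is the point of the patch.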
