Generalize drawtext utilities to make them usable in other filters.
This will be needed to introduce the QR code source and filter without
duplicating functionality.
The function lf_db_new was deprecated in lensfun commit 09dcd3e7ad (2017)
in favour of lf_db_create. The AVFilter was adjusted in 8b78eb312d,
but the configure check wasn't changed.
lensfun removed the lf_db_new declaration in commit 35c0017593, so
configure now fails to find lensfun.
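A minimal sketch of the kind of symbol probe this implies, assuming
only the lensfun C API entry points named above (lf_db_create replacing
the removed lf_db_new); the real check of course lives in configure:

    #include <lensfun.h>

    int main(void)
    {
        /* lf_db_create() is the replacement for the removed lf_db_new() */
        lfDatabase *db = lf_db_create();
        if (!db)
            return 1;
        lf_db_destroy(db);
        return 0;
    }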
Rewrite the format parsing code to make it more easily generalizable. In
particular, `invert_formats` does not depend on the type of format list
passed to it, which allows me to re-use this helper in an upcoming
commit.
Slightly shortens the code, at the sole cost of doing several small
mallocs (one per ff_add_format call) instead of a single malloc.
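A rough sketch of the shape of such a helper (illustrative only, not
the literal committed code), built on libavfilter's internal formats.h:

    #include "formats.h" /* AVFilterFormats, ff_add_format(), ff_formats_unref() */

    /* Replace *fmts (a rejection list) with its complement taken from `all`.
     * Only the integer format values are compared, so the same code works
     * for pixel and sample format lists alike. Each surviving entry goes
     * through ff_add_format(), hence several small allocations. */
    static int invert_formats(AVFilterFormats **fmts, AVFilterFormats *all)
    {
        AVFilterFormats *inverted = NULL;
        int ret;

        for (unsigned i = 0; i < all->nb_formats; i++) {
            int rejected = 0;
            for (unsigned j = 0; j < (*fmts)->nb_formats; j++) {
                if (all->formats[i] == (*fmts)->formats[j]) {
                    rejected = 1;
                    break;
                }
            }
            if (!rejected &&
                (ret = ff_add_format(&inverted, all->formats[i])) < 0) {
                ff_formats_unref(&inverted);
                return ret;
            }
        }

        ff_formats_unref(fmts);
        *fmts = inverted;
        return 0;
    }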
This is at odds with the YUV matrix negotiation API, in which such
dynamic changes in YUV encoding are no longer easily possible. There is
also no really strong motivating reason to do this, since the choice of
YUV matrix is essentially arbitrary and not actually related to the
Dolby Vision decoding process.
This filter will always accept any input format, even if the user sets
a specific in_range/in_color_matrix. This preserves the current
behavior, where passing a specific in_color_matrix merely overrides the
incoming frames' attributes. (Use `vf_format` to force a specific input
range.)
Because changing colorspace and color_range now requires reconfiguring
the link, we can lift sws_setColorspaceDetails out of scale_frame and
into config_props. (This will also get re-called if the input frame
properties change)
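A hedged sketch of what this amounts to, using the public swscale API
(the function below and its variable names are illustrative, not the
committed code):

    #include <libswscale/swscale.h>

    /* Called from config_props(): colorspace/range are now fixed per link
     * (re)configuration, so the conversion tables can be set once here
     * instead of once per frame. in_csp/out_csp are SWS_CS_* values,
     * ranges are 0 (limited) or 1 (full). */
    static int set_colorspace_details(struct SwsContext *sws,
                                      int in_csp,  int in_range,
                                      int out_csp, int out_range)
    {
        const int *in_tbl  = sws_getCoefficients(in_csp);
        const int *out_tbl = sws_getCoefficients(out_csp);

        /* brightness = 0, contrast = saturation = 1.0 in 16.16 fixed point */
        return sws_setColorspaceDetails(sws, in_tbl, in_range,
                                        out_tbl, out_range,
                                        0, 1 << 16, 1 << 16);
    }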
To allow adding proper negotiation, in particular, to fftools.
These values will simply be negotiated downstream for YUV formats, and
ignored otherwise.
Motivated by YUVJ removal. This change will allow full negotiation of
color ranges and matrices between filters as needed. By default, all
ranges and matrices are marked as supported.
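As a sketch, "marked as supported" roughly corresponds to a filter's
query function declaring the full lists via the helpers added by this
series (names as introduced here; simplified):

    static int query_formats(AVFilterContext *ctx)
    {
        int ret;

        /* accept every YUV colorspace and every color range */
        if ((ret = ff_set_common_color_spaces(ctx, ff_all_color_spaces())) < 0 ||
            (ret = ff_set_common_color_ranges(ctx, ff_all_color_ranges())) < 0)
            return ret;

        return 0;
    }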
Because grayscale formats are currently handled very inconsistently (and
in particular, assumed as forced full-range by swscale), we exclude them
from negotiation altogether for the time being, to get this API merged.
After filter negotiation is available, we can relax the
grayscale-is-forced-jpeg restriction again, when it will be more
feasible to do so without breaking a million test cases.
Note that this commit updates one FATE test as a consequence of the
sanity fallback for non-YUV formats. In particular, the test case now
writes rgb24(pc, gbr/unspecified/unspecified) to the matroska file,
instead of rgb24(unspecified/unspecified/unspecified) as before.
Currently, the logic inside the FF_FILTER_FORMATS_QUERY_FUNC branch
prevents this code from running when we have a filter with a single
video input and a single audio output, with the result that the audio
output link will not have its channel counts / samplerates correctly
initialized to their default values, possibly triggering a segfault
downstream.
An example of such a filter is vaf_spectrumsynth. Although this
particular filter already sets up the channel counts and samplerates as
part of its query function and therefore avoids triggering this bug,
the bug still exists in principle. (And importantly, it sets a bad
precedent.)
Call ff_default_query_formats even if a query func is set. This is
safe to do, because ff_default_query_formats is documented not to touch
any filter lists that were already set by the query func.
The reason to do this is that it allows us to extend
AVFilterFormatsConfig without having to touch every filter in existence.
An alternative implementation of this commit would be to explicitly add
a `ff_default_query_formats` call at the end of every query_formats
function, but that would end up functionally equivalent to this change
while touching a whole lot more code paths for no reason.
As a bonus, this eliminates some code/logic duplication from this
function.
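A hedged sketch of the resulting control flow in the graph's
format-query step (simplified, not the literal diff):

    /* run the filter's own query func first, if it has one ... */
    if (ctx->filter->formats_state == FF_FILTER_FORMATS_QUERY_FUNC &&
        (ret = ctx->filter->formats.query_func(ctx)) < 0)
        return ret;

    /* ... then always fill in the defaults; ff_default_query_formats()
     * leaves any list the query func already set untouched, so this only
     * initializes what is still missing (e.g. the audio side of a filter
     * with one video input and one audio output). */
    return ff_default_query_formats(ctx);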
For this kind of model, we can directly use its output as the final
result, just like the SSD model. The difference is that it splits the
output into two tensors: [x_min, y_min, x_max, y_max, confidence] and
[label_id].
For an example model, see: https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-detection-0106
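Roughly, combining the two tensors looks like this (purely
illustrative; the variable names are hypothetical, not the backend's,
and whether coordinates are normalized or in input-image pixels depends
on the model):

    /* boxes:  nb_boxes entries of [x_min, y_min, x_max, y_max, confidence]
     * labels: nb_boxes entries of label_id, from the second output tensor */
    for (int i = 0; i < nb_boxes; i++) {
        const float *box = &boxes[i * 5];
        float conf       = box[4];
        int   label_id   = (int)labels[i];

        if (conf < conf_threshold)
            continue;

        /* assuming normalized coordinates; map them onto the frame */
        int x0 = av_clip((int)(box[0] * frame_w), 0, frame_w - 1);
        int y0 = av_clip((int)(box[1] * frame_h), 0, frame_h - 1);
        int x1 = av_clip((int)(box[2] * frame_w), 0, frame_w - 1);
        int y1 = av_clip((int)(box[3] * frame_h), 0, frame_h - 1);

        /* ... fill a detection bbox with (x0, y0, x1 - x0, y1 - y0),
         *     conf and label_id ... */
    }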
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
Add support for dynamic outputs. Some models don't have a fixed output
size; the size changes according to the detection result. Now the
OpenVINO backend can run these kinds of models.
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
Fixes: out of array access
Fixes: tickets/10745/poc12ffmpeg
Found-by: Li Zeyuan and Zeng Yunxiang.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: out of array read
Fixes: tickets/10744/poc11ffmpeg
Found-by: Li Zeyuan and Zeng Yunxiang.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: out of array access
Fixes: tickets/10743/poc10ffmpeg
Found-by: Zeng Yunxiang and Li Zeyuan
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: out of array access
Fixes: tickets/10747/poc14ffmpeg
Found-by: Zeng Yunxiang and Song Jiaxuan
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The code works in steps of 2 lines and lacks support for odd heights.
Implementing proper odd-height support would be better, but for now
this fixes the out-of-array access.
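Purely as an illustration of the pattern (hypothetical names), a loop
that consumes two lines per iteration overruns the buffer on its last
iteration when the height is odd:

    /* two lines per step: fine for even heights, but when height is odd
     * the second call below reads/writes one line past the end */
    for (int y = 0; y < height; y += 2) {
        process_line(dst + (y + 0) * (size_t)stride, width);
        process_line(dst + (y + 1) * (size_t)stride, width);
    }

Bounding the loop at `y + 1 < height` (and handling the final line
separately once odd-height support is implemented) is the minimal way
to avoid the overrun.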
Fixes: out of array access
Fixes: tickets/10702/poc6ffmpe
Found-by: Zeng Yunxiang
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: out of array access
Fixes: 64081/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-6151006496620544
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: 2314885530818453536 - -7412889664301817824 cannot be represented in type 'long'
Fixes: 64296/clusterfuzz-testcase-minimized-ffmpeg_dem_MOV_fuzzer-6304027146846208
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>