The FFmpeg-qsv decoder reinitializes the codec when the width or height
changes, but a resolution change is not the only condition that requires
reinitialization. Change the code to use the return value of
DecodeFrameAsync() to decide whether the decoder needs to be reinitialized.
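A minimal sketch of the idea, assuming libmfx and a hypothetical
qsv_reinit() helper (the actual handling in qsvdec.c differs in detail):

    #include <mfxvideo.h>

    static mfxStatus decode_step(mfxSession session, mfxBitstream *bs,
                                 mfxFrameSurface1 *work,
                                 mfxFrameSurface1 **out, mfxSyncPoint *sync)
    {
        mfxStatus sts = MFXVideoDECODE_DecodeFrameAsync(session, bs, work,
                                                        out, sync);
        if (sts == MFX_ERR_INCOMPATIBLE_VIDEO_PARAM) {
            /* The bitstream needs new decoding parameters; a resolution
             * change is only one possible cause.  Reinitialize here
             * instead of comparing width/height ourselves. */
            return qsv_reinit(session, bs); /* hypothetical helper */
        }
        return sts;
    }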
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Guangxin Xu <guangxin.xu@intel.com>
Commit e050959103 implemented passing in
modifiers by using the PRIME_2 memory type, which only exists in v2 of
the library.
To still support v1 of the library, conditionally compile both the new
code and the old code from before that commit using VA_CHECK_VERSION().
Note that PRIME_2 memory was introduced in VA-API 1.1, so use
VA_CHECK_VERSION(1, 1, 0) instead of VA_CHECK_VERSION(2, 0, 0). (Haihao)
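A hedged sketch of the resulting structure (descriptor setup omitted; the
real code in hwcontext_vaapi.c is more involved):

    #include <va/va.h>
    #include <va/va_drmcommon.h>

    #if VA_CHECK_VERSION(1, 1, 0)
        /* VA-API >= 1.1: PRIME_2 descriptors can carry DRM format
         * modifiers. */
        VADRMPRIMESurfaceDescriptor prime_desc = { 0 };
        /* fill prime_desc, including
         * prime_desc.objects[n].drm_format_modifier */
    #else
        /* Older VA-API: keep the pre-e050959103 path, which cannot
         * express modifiers. */
        VASurfaceAttribExternalBuffers ext_desc = { 0 };
        /* fill ext_desc */
    #endif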
Signed-off-by: Ingo Brückl <ib@wupperonline.de>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
In case the BSF has not been drained before flushing/closing,
the context's next_frame might still be set; yet it is not freed
in flush or close. The former only zeroes the pointer (which
leaks the frame in case it was set). So free the frame both
when closing and when flushing.
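A sketch of the fix, assuming a private BSF context with an
AVFrame *next_frame member (struct and callback names are illustrative):

    static void bsf_flush(AVBSFContext *bsf)
    {
        PrivContext *s = bsf->priv_data;
        /* Previously this only did s->next_frame = NULL, leaking the
         * frame; av_frame_free() unreferences it and resets the pointer. */
        av_frame_free(&s->next_frame);
    }

    static void bsf_close(AVBSFContext *bsf)
    {
        PrivContext *s = bsf->priv_data;
        av_frame_free(&s->next_frame); /* previously missing entirely */
    }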
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Since the switch to the new FIFO API in commit
ea511196a6, the FIFO is always
grown by the amount of data intended to be written into it,
even if the FIFO already has enough free space. Fix this by
growing the FIFO only when needed, and then only by the amount
that is actually needed.
The allocation errors that resulted from this uncovered another bug:
The context is left in an inconsistent state in case the FIFO can't
be grown, because the FIFO does not contain as much data as the sizes
contained in the PacketDesc list claim. This led to an infinite loop
in output_packet() (called from mpeg_mux_end()).
Fix this by growing the FIFO before adding a new PacketDesc element,
thereby preventing the context from becoming inconsistent.
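A hedged sketch of the corrected pattern (mpegenc.c specifics omitted):

    size_t can_write = av_fifo_can_write(fifo);

    if (can_write < pkt->size) {
        /* Grow only when needed and only by the missing amount. */
        int ret = av_fifo_grow2(fifo, pkt->size - can_write);
        if (ret < 0)
            return ret; /* nothing queued yet, context stays consistent */
    }
    /* Only now append the PacketDesc entry, then: */
    av_fifo_write(fifo, pkt->data, pkt->size);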
Reported-by: Nicolas Gaullier <nicolas.gaullier@cji.paris>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is possible because every given FFCodec has to implement
exactly one of these. Doing so decreases sizeof(FFCodec) and
therefore decreases the size of the binary.
Note that with position-independent code the decrease
is in .data.rel.ro, so it translates into reduced
memory consumption.
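A hedged sketch of how this can look in codec_internal.h (member list
abridged, names illustrative):

    struct AVCodecContext; struct AVFrame; struct AVPacket;

    typedef struct FFCodec {
        /* ... */
        unsigned cb_type; /* records which callback the codec provides */
        union {
            int (*decode)(struct AVCodecContext *avctx, struct AVFrame *frame,
                          int *got_frame_ptr, struct AVPacket *avpkt);
            int (*receive_frame)(struct AVCodecContext *avctx,
                                 struct AVFrame *frame);
            int (*encode)(struct AVCodecContext *avctx, struct AVPacket *avpkt,
                          const struct AVFrame *frame, int *got_packet_ptr);
            int (*receive_packet)(struct AVCodecContext *avctx,
                                  struct AVPacket *avpkt);
        } cb;
        /* ... */
    } FFCodec;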
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This increases type-safety by avoiding conversions from/through void*.
It also avoids the boilerplate "AVFrame *frame = data;" line
for non-subtitle decoders.
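Illustratively (hypothetical decoder), the callback changes from

    static int foo_decode_frame(AVCodecContext *avctx, void *data,
                                int *got_frame, AVPacket *avpkt)
    {
        AVFrame *frame = data; /* boilerplate cast */
        /* ... decode avpkt into frame ... */
        return avpkt->size;
    }

to

    static int foo_decode_frame(AVCodecContext *avctx, AVFrame *frame,
                                int *got_frame, AVPacket *avpkt)
    {
        /* ... decode avpkt into frame ... */
        return avpkt->size;
    }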
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This increases type-safety by avoiding conversions from/through void*.
It also avoids the boilerplate "AVSubtitle *sub = data;" line
for subtitle decoders. Its only downside is that it increases
sizeof(FFCodec), yet this can be more than offset later on.
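The analogous change for a subtitle decoder (hypothetical name) is a
signature of the form

    static int foo_decode_sub(AVCodecContext *avctx, AVSubtitle *sub,
                              int *got_sub, AVPacket *avpkt)

with the "AVSubtitle *sub = data;" line dropped.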
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
On output streams where a multichannel stream needs to be stored as one track
per channel, each track will have a channel layout describing the position of
the channel it contains. For the track holding front center, the mov muxer was
using the mov layout "mono" instead of the label for the front center position.
Since our channel layout API considers front center == mono, we need a
heuristic: check that every audio track contains a single-channel stream and
that only one of them is front center. In that case, write the front center
label instead of signaling a mono layout.
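A hedged sketch of the check (the real movenc code and helpers differ):

    int all_single_channel = 1, nb_front_center = 0;

    for (unsigned i = 0; i < s->nb_streams; i++) {
        AVCodecParameters *par = s->streams[i]->codecpar;
        if (par->codec_type != AVMEDIA_TYPE_AUDIO)
            continue;
        if (par->ch_layout.nb_channels != 1) {
            all_single_channel = 0;
            break;
        }
        if (av_channel_layout_channel_from_index(&par->ch_layout, 0) ==
            AV_CHAN_FRONT_CENTER)
            nb_front_center++;
    }
    /* Write the front-center label instead of the mono layout tag only
     * in this one-track-per-channel situation. */
    int use_fc_label = all_single_channel && nb_front_center == 1;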
Fixes the last part of ticket #2865
Signed-off-by: James Almer <jamrial@gmail.com>
The inputs are unused except for this computation, so wraparound
does not give an attacker any extra values, as the inputs are already
fully controlled.
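The usual pattern for such fixes is to do the arithmetic in unsigned,
where wraparound is well defined (illustrative, not the exact exr.c
expression):

    #include <limits.h>

    int a = 0, b = INT_MIN;
    /* 0 - (-2147483648) overflows int; unsigned subtraction wraps, and
     * the conversion back to int is implementation-defined, not UB. */
    int diff = (unsigned)a - (unsigned)b;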
Fixes: signed integer overflow: 0 - -2147483648 cannot be represented in type 'int'
Fixes: 45820/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_EXR_fuzzer-5766159019933696
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: -128275513086 * -76056576 cannot be represented in type 'long'
Fixes: 45818/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DIRAC_fuzzer-5129799149944832
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: -101 * 71041254 cannot be represented in type 'int'
Fixes: 45938/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TAK_fuzzer-4687974320701440
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: -2146549696 - 3923884 cannot be represented in type 'int'
Fixes: 45907/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_APE_fuzzer-5992380584558592
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This makes the filters match their declaration in
libavfilter/allfilters.c; the earlier discrepancy was, incidentally,
undefined behavior.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Calculate Spatial Info (SI) and Temporal Info (TI) scores for a video, as defined
in ITU-T P.910: Subjective video quality assessment methods for multimedia
applications.
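For reference, P.910 defines the two metrics per frame F_n roughly as
(paraphrased from the Recommendation):

    SI = max over time { stddev over space [ Sobel(F_n) ] }
    TI = max over time { stddev over space [ F_n(i,j) - F_(n-1)(i,j) ] }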
It is currently a "Picture", an mpegvideo-specific type
that has a lot of baggage, all of which is unnecessary
for new_picture, because only its embedded AVFrame
is ever used. So just use an ordinary AVFrame.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In the aforementioned case, mpegvideo_enc.c calls
ff_mjpeg_encode_stuffing() at the end of every line, which
pads the output to byte alignment and escapes it;
yet it writes neither the restart markers nor
the DRI marker in the header, so the output files
are broken.
Fix this by writing these markers depending on the number of
slices and not on the number of threads in use; this also makes
the output of the encoder reproducible for a given slice count
and is therefore important if encoder tests that actually use
-threads auto are added in the future.
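For orientation, a hedged sketch of the markers this implies (constant
names as in mjpeg.h, but the snippet is illustrative only):

    if (nb_slices > 1) {
        /* Header: DRI (0xFFDD) defines the restart interval in MCUs. */
        put_marker(&pb, DRI);
        put_bits(&pb, 16, 4);                /* segment length */
        put_bits(&pb, 16, restart_interval);
    }

    /* At the end of every slice except the last, after byte-aligning
     * and stuffing: RSTn markers cycle through 0xFFD0..0xFFD7. */
    put_marker(&pb, RST0 + (slice_nr & 7));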
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Our code for writing optimal Huffman tables is incompatible
with using multiple slices, hence commit
884506dfe2, which implemented this,
also added an assert that slice_context_count is always 1.
Yet this was always wrong: a) The MJPEG encoder has (and had)
the AV_CODEC_CAP_SLICE_THREADS capability, so asserting that
it always uses one slice context is incorrect.
b) That commit did not add any proper check ensuring that
optimal Huffman tables are never used together with multiple slices;
such a check only arrived with 03eb0515c1.
c) The assert is in the wrong place: ff_mjpeg_encode_init() is
called before the actual slice_context_count is set, which is
why the assert was never triggered.
Therefore this commit removes the assert.
Also remove an assert from the SpeedHQ encoder that shares b) and c).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
One can use slices without slice-threading. The results for
mpegvideo-encoders are abysmal: AMV, SpeedHQ, H.263, RV10, RV20,
MSMPEG4v2, MSMPEG4v3 and WMV1 produce broken files.
WMV2 meanwhile expects the MpegEncContext given to ff_wmv2_encode_mb()
to be at the beginning of a Wmv2Context (a structure that this encoder
shares with the WMV2 decoder), yet this is only true for the
main context and not for the slice contexts, leading to segfaults.
SpeedHQ additionally triggers an av_assert2, because it is not
byte-aligned at a position where it ought to be byte-aligned.
Given that no codec without slice-threading support works, this commit
disallows using slices unless the encoder supports slice threading.
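A hedged sketch of the kind of check this adds (exact placement and
message in mpegvideo_enc.c may differ):

    if (avctx->slices > 1 &&
        !(avctx->codec->capabilities & AV_CODEC_CAP_SLICE_THREADS)) {
        av_log(avctx, AV_LOG_ERROR,
               "Slices are only supported by encoders with "
               "slice-threading support\n");
        return AVERROR(EINVAL);
    }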
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.
vc1dsp.vc1_unescape_buffer_c: 918624.7
vc1dsp.vc1_unescape_buffer_neon: 142958.0
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.
vc1dsp.vc1_unescape_buffer_c: 655617.7
vc1dsp.vc1_unescape_buffer_neon: 118237.0
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows. Note that the C
version can still outperform the NEON version in specific cases. The balance
between different code paths is stream-dependent, but in practice the best
case happens about 5% of the time, the worst case happens about 40% of the
time, and the complexity of the remaining cases falls somewhere in between.
Therefore, taking the average of the best and worst case timings is
probably a conservative estimate of the degree by which the NEON code
improves performance.
vc1dsp.vc1_h_loop_filter4_bestcase_c: 19.0
vc1dsp.vc1_h_loop_filter4_bestcase_neon: 48.5
vc1dsp.vc1_h_loop_filter4_worstcase_c: 144.7
vc1dsp.vc1_h_loop_filter4_worstcase_neon: 76.2
vc1dsp.vc1_h_loop_filter8_bestcase_c: 41.0
vc1dsp.vc1_h_loop_filter8_bestcase_neon: 75.0
vc1dsp.vc1_h_loop_filter8_worstcase_c: 294.0
vc1dsp.vc1_h_loop_filter8_worstcase_neon: 102.7
vc1dsp.vc1_h_loop_filter16_bestcase_c: 54.7
vc1dsp.vc1_h_loop_filter16_bestcase_neon: 130.0
vc1dsp.vc1_h_loop_filter16_worstcase_c: 569.7
vc1dsp.vc1_h_loop_filter16_worstcase_neon: 186.7
vc1dsp.vc1_v_loop_filter4_bestcase_c: 20.2
vc1dsp.vc1_v_loop_filter4_bestcase_neon: 47.2
vc1dsp.vc1_v_loop_filter4_worstcase_c: 164.2
vc1dsp.vc1_v_loop_filter4_worstcase_neon: 68.5
vc1dsp.vc1_v_loop_filter8_bestcase_c: 43.5
vc1dsp.vc1_v_loop_filter8_bestcase_neon: 55.2
vc1dsp.vc1_v_loop_filter8_worstcase_c: 316.2
vc1dsp.vc1_v_loop_filter8_worstcase_neon: 72.7
vc1dsp.vc1_v_loop_filter16_bestcase_c: 62.2
vc1dsp.vc1_v_loop_filter16_bestcase_neon: 103.7
vc1dsp.vc1_v_loop_filter16_worstcase_c: 646.5
vc1dsp.vc1_v_loop_filter16_worstcase_neon: 110.7
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>