The only thing besides the hwaccel that this function uses from
AVCodecHWConfigInternal is the pixel format, which should always match
the hwaccel one.
Will be useful in following commits.
Intel MediaSDK and oneVPL expect contiguous allocation for data[i],
but av_frame_get_buffer may insert mandatory padding bytes between
data[i] and data[i+1]. This patch removes all extra padding bytes.
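As an illustration, one way to obtain a gap-free layout is to size the
whole image with alignment 1 and fill the plane pointers into a single
buffer; a hedged sketch, not the actual hwcontext_qsv code:
    #include <libavutil/error.h>
    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mem.h>

    /* Sketch only: allocate one buffer and lay the planes out back to
     * back (alignment 1), so data[i+1] starts where plane i ends. */
    static int alloc_contiguous(AVFrame *frame)
    {
        int size = av_image_get_buffer_size(frame->format, frame->width,
                                            frame->height, 1);
        uint8_t *buf;
        int ret;

        if (size < 0)
            return size;
        buf = av_malloc(size);
        if (!buf)
            return AVERROR(ENOMEM);
        ret = av_image_fill_arrays(frame->data, frame->linesize, buf,
                                   frame->format, frame->width,
                                   frame->height, 1);
        if (ret < 0)
            av_freep(&buf);
        return ret;
    }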
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The data copy is unnecessary for packed formats when the frame width
and height are aligned.
For example:
$ ffmpeg -f lavfi -i testsrc=size=1920x1088 -vf "format=yuyv422" -c:v hevc_qsv -f null -
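A hedged sketch of the kind of check that decides whether the copy can
be skipped; the 16-pixel alignment is illustrative, not the exact value
the runtime requires:
    #include <libavutil/frame.h>
    #include <libavutil/macros.h>

    /* Sketch only: repack (copy) when the frame does not already match
     * the alignment the runtime allocates its surfaces with. */
    static int needs_copy(const AVFrame *frame)
    {
        return frame->width  != FFALIGN(frame->width,  16) ||
               frame->height != FFALIGN(frame->height, 16);
    }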
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The mfx implementation based on D3D11 is expected for an internal
session on Windows; however, this implementation is sometimes not
supported [1]. This patch adds a fallback to the default mfx
implementation.
[1] https://github.com/intel/cartwheel-ffmpeg/issues/246
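A hedged sketch of the fallback idea in raw MediaSDK terms; the real
change goes through the QSV hwdevice code, and the implementation flags
used here are an assumption:
    #include <mfxvideo.h>

    static mfxStatus open_internal_session(mfxSession *session)
    {
        mfxVersion ver = { { 1, 1 } };  /* minimum API version, illustrative */
        mfxStatus  ret = MFXInit(MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11,
                                 &ver, session);

        /* Fall back to the default implementation when D3D11 is rejected. */
        if (ret != MFX_ERR_NONE)
            ret = MFXInit(MFX_IMPL_AUTO_ANY, &ver, session);
        return ret;
    }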
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Fixes decoding packets containing split temporal units, as generated for example
by the av1_frame_split bsf.
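For example, such a stream can be produced with (filenames here are
placeholders):
$ ffmpeg -i INPUT.ivf -c:v copy -bsf:v av1_frame_split OUTPUT.ivf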
Signed-off-by: James Almer <jamrial@gmail.com>
Fixes: signed integer overflow: -159584 * 5105950 cannot be represented in type 'int'
Fixes: 55165/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_BONK_fuzzer-5796023719297024
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
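The usual remedy for an overflow like the one quoted above is to widen
the multiplication to 64 bits before it happens; a hedged sketch, not
the literal decoder change:
    #include <stdint.h>

    /* -159584 * 5105950 does not fit in a 32-bit int, so multiply in
     * 64 bits and clip or scale afterwards as needed. */
    static int64_t scale_sample(int a, int b)
    {
        return (int64_t)a * b;
    }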
As this is an AV_CODEC_CAP_OTHER_THREADS decoder, threading is handled by the
underlying library. In this case, the frame delay is calculated by libdav1d
based on the values from avctx->thread_count and the private max_frame_delay
option.
Export said delay reported by the library in AVCodecContext.delay.
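A hedged sketch of the export, assuming dav1d_get_frame_delay() is
available; the exact mapping in libavcodec/libdav1d.c may differ:
    #include <dav1d/dav1d.h>
    #include <libavcodec/avcodec.h>

    static int export_delay(AVCodecContext *avctx, const Dav1dSettings *s)
    {
        int delay = dav1d_get_frame_delay(s);

        if (delay < 0)
            return delay;      /* negative values are library errors */
        avctx->delay = delay;  /* frames buffered before first output */
        return 0;
    }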
Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
When low-latency mode is enabled, the encoder eliminates frame
reordering and follows a one-in-one-out encoding mode.
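From the caller's side this means roughly one packet per submitted
frame; a hedged sketch of that pattern with the public API, assuming
the encoder really behaves this way in low-latency mode:
    #include <libavcodec/avcodec.h>

    static int encode_one(AVCodecContext *enc, AVFrame *frame, AVPacket *pkt)
    {
        int ret = avcodec_send_frame(enc, frame);
        if (ret < 0)
            return ret;
        /* No reordering: a packet should be available without feeding
         * further input frames. */
        return avcodec_receive_packet(enc, pkt);
    }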
Signed-off-by: xufuji456 <839789740@qq.com>
Signed-off-by: Rick Kern <kernrj@gmail.com>
Use the next I/P/B picture or start code as the end of the current
frame. Before this patch, extension start codes, user data start codes,
sequence end codes and so on were treated as the start of the next
frame.
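A hedged sketch of the distinction, using the MPEG-1/2 start-code
values; this is not the literal parser code:
    #include <stdint.h>

    /* Only codes that can begin a new coded picture end the current
     * frame; extension (0xB5), user data (0xB2) and sequence end (0xB7)
     * do not. */
    static int ends_current_frame(uint8_t code)
    {
        return code == 0x00 ||   /* picture start code   */
               code == 0xB3 ||   /* sequence header code */
               code == 0xB8;     /* group start code     */
    }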
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Not only does this information rely on the concept of a sequence of
frames, which is completely out of place as a field in AVFrame, but
there are also no known or intended uses of this field.
Signed-off-by: James Almer <jamrial@gmail.com>
Accept it and pass it through unchanged.
The standard requires that decoders ignore unknown metadata, and indeed
this is tested by some of the Argon coverage streams.
* take num_ticks_per_picture_minus_1 into account, since it is part
  of the framerate computation (see the sketch after this list)
* stop exporting num_ticks_per_picture_minus_1 into
AVCodecContext.ticks_per_frame, as that field is used for other
purposes (in conjunction with repeat_pict, which is not used at all by
av1)
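A hedged sketch of the computation the first point refers to, with
field names mirroring the AV1 timing_info syntax; this is not the exact
lavc code:
    #include <limits.h>
    #include <stdint.h>
    #include <libavutil/rational.h>

    static AVRational av1_framerate(uint32_t time_scale,
                                    uint32_t num_units_in_display_tick,
                                    uint32_t num_ticks_per_picture_minus_1)
    {
        int64_t den = num_units_in_display_tick *
                      ((int64_t)num_ticks_per_picture_minus_1 + 1);
        AVRational fr;

        av_reduce(&fr.num, &fr.den, time_scale, den, INT_MAX);
        return fr;
    }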
For encoding, this field is entirely redundant with
AVCodecContext.framerate.
For decoding, this field is entirely redundant with
AV_CODEC_PROP_FIELDS.
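A hedged sketch of those replacements, assuming the caller has the
framerate set on the encoding side and consults the codec descriptor on
the decoding side:
    #include <libavcodec/avcodec.h>
    #include <libavcodec/codec_desc.h>
    #include <libavutil/rational.h>

    /* Encoding side: frame duration straight from the framerate. */
    static AVRational frame_duration(const AVCodecContext *enc)
    {
        return av_inv_q(enc->framerate);
    }

    /* Decoding side: whether the codec codes fields, via the descriptor. */
    static int codec_has_fields(enum AVCodecID id)
    {
        const AVCodecDescriptor *desc = avcodec_descriptor_get(id);
        return desc && (desc->props & AV_CODEC_PROP_FIELDS);
    }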
H.264 and mpeg12 parsers need to be adjusted at the same time to stop
using the value of AVCodecContext.ticks_per_frame, because it is not set
correctly unless the codec has been opened. Previously this would result
in both the parser and lavf seeing the same incorrect value, which would
cancel out.
Updating lavf but not the parsers would result in the correct value in
lavf, but the wrong one in the parsers, which would break some tests.