This already fixes a race in the vp9-encparams test. In this test,
side data is added to the current frame after it has been decoded
(and therefore after ff_thread_finish_setup() has been called).
Yet the update_thread_context callback called ff_thread_ref_frame()
and therefore av_frame_ref() with this frame as the source frame,
and the ensuing read was unsynchronised with adding the side data,
i.e. there was a data race.
By switching to the ProgressFrame API the implicit av_frame_ref()
is removed and the race is fixed, except when this frame is later
reused via a show-existing-frame, which uses an explicit
av_frame_ref(). The vp9-encparams test does not cover this case,
so this commit already fixes all the races in this test.
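To illustrate the race (a simplified sketch; the actual VP9 code uses
a helper to create the side data):

    /* decoding thread A, after ff_thread_finish_setup(avctx): */
    av_frame_new_side_data(f, AV_FRAME_DATA_VIDEO_ENC_PARAMS, size);
    /* this modifies f->side_data and f->nb_side_data */

    /* user thread, concurrently, in update_thread_context: */
    ff_thread_ref_frame(&dst->tf, &src->tf);
    /* av_frame_ref() reads f->side_data unsynchronised -> data race */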
This decoder kept multiple references to the same ThreadFrames
in the same context and therefore performed lots of implicit
av_frame_ref() calls even when decoding single-threaded. This
incurred lots of small allocations: when decoding an ordinary 10s
video in single-threaded mode, the number of allocations reported
by Valgrind went down from 57,814 to 20,908; for 10 threads it
went down from 84,223 to 21,901.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Avoids implicit av_frame_ref() and therefore allocations
and error checks. It also avoids explicitly allocating
the AVFrames (this now happens implicitly when getting the buffer)
and allows the flushing code to be reused for freeing
the ProgressFrames.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Avoids implicit av_frame_ref() and therefore allocations
and error checks.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Frame-threaded decoders with inter-frame dependencies
use the ThreadFrame API for syncing. It works as follows:
During init each thread allocates an AVFrame for every
ThreadFrame.
Thread A reads the header of its packet and allocates
a buffer for an AVFrame with ff_thread_get_ext_buffer()
(which also allocates a small structure that is shared
with other references to this frame) and sets its fields,
including side data. Then said thread calls ff_thread_finish_setup().
From that moment onward it is no longer allowed to change any
of the AVFrame fields, but it may change fields which are an
indirection away, like the contents of AVFrame.data or of already
existing side data.
After thread A has called ff_thread_finish_setup(),
another thread (the user one) calls the codec's update_thread_context
callback, which in turn calls ff_thread_ref_frame(), which calls
av_frame_ref(), which reads every field of A's AVFrame; hence the
above restriction on modifications of the AVFrame (any modification
of the AVFrame by A after ff_thread_finish_setup() would be a data
race). Of course, this av_frame_ref() also incurs allocations and
therefore needs to be checked for failure. ff_thread_ref_frame()
also references the small structure used for communicating progress.
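In code, the two sides look roughly like this (a sketch; most error
handling and all progress reporting are omitted, and the context
field names are made up):

    /* thread A, after parsing the header of its packet: */
    ret = ff_thread_get_ext_buffer(avctx, &s->cur_frame,
                                   AV_GET_BUFFER_FLAG_REF);
    if (ret < 0)
        return ret;
    s->cur_frame.f->pict_type = AV_PICTURE_TYPE_I; /* set fields */
    ff_thread_finish_setup(avctx);
    /* no changes to s->cur_frame.f's fields beyond this point */

    /* user thread, in the update_thread_context callback: */
    ret = ff_thread_ref_frame(&dst->cur_frame, &src->cur_frame);
    if (ret < 0)
        return ret; /* the underlying av_frame_ref() can fail */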
This av_frame_ref() makes it awkward to propagate values that
only become known during decoding to later threads (in case of
frame reordering or other mechanisms of delayed output (like
show-existing-frames) it's not the decoding thread, but a later
thread that returns the AVFrame). E.g. for VP9, when exporting
video encoding parameters as side data, the number of blocks only
becomes known during decoding, so one can't allocate the side data
before ff_thread_finish_setup(). It is currently allocated
afterwards, which leads to a data race in the vp9-encparams test
when using frame-threading. Returning decode_error_flags is also
complicated by this.
To perform this exchange a buffer shared between the references
is needed (notice that simply giving the later threads a pointer
to the original AVFrame does not work, because said AVFrame will
be reused later on when thread A decodes the next packet given to it).
One could extend the buffer already used for progress for this
or use a new one (requiring yet another allocation), yet both
of these approaches have the drawback of being unnatural, ugly
and requiring quite a lot of ad-hoc code. E.g. in case of the VP9
side data mentioned above one could not simply use the helper
that allocates and adds the side data to an AVFrame in one go.
The ProgressFrame API meanwhile offers a different solution to all
of this. It is based on the idea that the most natural
shared object for sharing information about an AVFrame between
decoding threads is the AVFrame itself. To actually implement this
the AVFrame needs to be reference counted. This is achieved by
putting an (owning) pointer into a shared (and opaque) structure
that is managed by the RefStruct API and which also contains
the state necessary for progress reporting.
The users get a pointer to this AVFrame with the understanding
that the owner may set all the fields until it has indicated
that it has finished decoding this AVFrame; then the users are
allowed to read everything. Every decoder may of course employ
a different contract than the one outlined above.
Given that there is no underlying av_frame_ref(), creating
references to a ProgressFrame can't fail. Only
ff_thread_progress_get_buffer() can fail, but given that
it replaces calls to ff_thread_get_ext_buffer(), it is called
at places where errors are already expected and properly
taken care of.
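A sketch of the intended usage (the ref helper shown here is an
assumed analogue of ff_thread_ref_frame(); the context field names
are made up):

    /* owner/producer: */
    ret = ff_thread_progress_get_buffer(avctx, &s->cur,
                                        AV_GET_BUFFER_FLAG_REF);
    if (ret < 0)
        return ret; /* the only point of failure */
    s->cur.f->pict_type = AV_PICTURE_TYPE_P; /* owner sets fields */
    /* ... decode, report progress, signal completion ... */

    /* any other thread: creating a reference cannot fail */
    ff_thread_progress_ref(&dst->cur, &src->cur);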
The ProgressFrames are empty (i.e. the AVFrame pointer is NULL
and the AVFrames are not allocated during init at all)
while not in use; ff_thread_progress_get_buffer() both
sets up the actual ProgressFrame and already calls
ff_thread_get_buffer(). So instead of checking
ThreadFrame.f->data[0] or ThreadFrame.f->buf[0] for NULL
to mean "this reference frame does not exist", one should check
whether ProgressFrame.f is NULL.
This also implies that one can only set AVFrame properties
after having allocated the buffer. This restriction is not deep:
if it becomes onerous for any codec, ff_thread_progress_get_buffer()
can be broken up. The user would then have to get a buffer
on their own.
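For instance, the emptiness check changes as follows (sketch):

    if (!s->ref.f->buf[0]) /* ThreadFrame: no reference frame */
    if (!s->ref.f)         /* ProgressFrame: no reference frame */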
In order to avoid unnecessary allocations, the shared structure
is pooled, so that both the structure as well as the AVFrame
itself are reused. This means that there won't be lots of
unnecessary allocations in case of non-frame-threaded decoding.
It might even turn out to need fewer allocations than the current
code (the current code allocates AVFrames for every DPB slot, but
these are often excessively large and not completely used;
the new code allocates them on demand). Pooling relies on the
reset function of the RefStruct pool API; it would be impossible
to implement with the AVBufferPool API.
Finally, ProgressFrames have no notion of owner; they are built
on top of the ThreadProgress API which also lacks such a concept.
Instead every ThreadProgress and every ProgressFrame contains
its own mutex and condition variable, making it completely independent
of pthread_frame.c. Just as with the ThreadFrame API, it is simply
presumed that only the actual owner/producer of a frame reports
progress on said frame.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The API is similar to the ThreadFrame API, with the exception
that it no longer has an included AVFrame and that it has its
own mutexes and condition variables, which makes it more independent
of pthread_frame.c. One can wait on anything via a ThreadProgress.
One just has to ensure that the lifetime of the object containing
the ThreadProgress is long enough. This will typically be solved
by putting a ThreadProgress in a refcounted structure that is
shared between threads.
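A minimal sketch of the intended pattern (struct and field names
are made up):

    typedef struct SharedFrame {
        ThreadProgress progress; /* lives in a refcounted struct */
        /* ... more per-frame state shared between threads ... */
    } SharedFrame;

    /* producer, after finishing row y: */
    ff_thread_progress_report(&sf->progress, y);

    /* consumer, before accessing row y: */
    ff_thread_progress_await(&sf->progress, y);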
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Reviewed-by: epirat07@gmail.com
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The native VVC decoder does not yet support quality/spatial/multiview
scalability. Bitstreams requiring this feature could cause crashes.
The patch fixes this by skipping NAL units which are not in the base
layer, warning the user while doing so.
Signed-off-by: Frank Plowman <post@frankplowman.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Some files with no image items nonetheless contain such boxes, and
were working prior to the recent HEIF parsing overhaul.
Ignore these boxes instead, to restore the old behavior.
Fixes a regression since d9fed9df2a.
Tested-by: Wu Jianhua <toqsxw@outlook.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Only the last 256 samples of each frame are used;
the encoder currently uses a buffer for 1536 + 256 samples
whose first 256 samples are the last 256 samples
from the previous frame and the next 1536 are the samples
of the current frame.
Yet since 238b2d4155 all the
DSP functions only need 256 contiguous samples, and this can
be achieved by only retaining the last 256 samples of each
frame. Doing so saves 6 KiB per channel.
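Roughly (a sketch; buffer and variable names are made up):

    /* after encoding a frame, retain only the trailing 256 samples
       as context for the next frame instead of keeping the whole
       1536 + 256 sample buffer: */
    memcpy(s->last_samples[ch], &samples[1536 - 256],
           256 * sizeof(*samples));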
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
By default, don't use the color properties from the input frame as
output frame properties when performing HDR to SDR conversion.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
This codec supports FLAG_B_PICTURE_REFERENCES. We need to correctly
fill reference_pic_flag with the is_reference variable instead of 0
for B-frames.
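I.e. roughly the following, using libva's picture parameter flags
(a sketch of the idea, not the exact code):

    vpic->pic_fields.bits.reference_pic_flag = pic->is_reference;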
Signed-off-by: Tong Wu <tong1.wu@intel.com>
libva2 doesn't require a fixed surface array any more, so we may use
a dynamic frame pool for decoding when libva2 is available, which
allows a downstream element to store more frames from VAAPI decoders
and fixes the error below:
$ ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi \
-i input.mp4 -c:v hevc_vaapi -f null -
...
[h264 @ 0x557a075a1400] get_buffer() failed
[h264 @ 0x557a075a1400] thread_get_buffer() failed
[h264 @ 0x557a075a1400] decode_slice_header error
[h264 @ 0x557a075a1400] no frame!
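In terms of the public hwcontext API, this amounts to allowing the
equivalent of a dynamically sized pool (sketch with example values):

    AVBufferRef *ref = av_hwframe_ctx_alloc(device_ref);
    AVHWFramesContext *fc = (AVHWFramesContext *)ref->data;
    fc->format            = AV_PIX_FMT_VAAPI;
    fc->sw_format         = AV_PIX_FMT_NV12;
    fc->width             = 1920;
    fc->height            = 1080;
    fc->initial_pool_size = 0; /* 0 = dynamic pool with libva2 */
    ret = av_hwframe_ctx_init(ref);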
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
This diff removes 4 unused ARMv7 NEON fixed-point DSP functions.
The functions were originally moved here by 4958f35a2 (Dec 2013).
After 9e05421db (Jan 2021), as part of the refactor of the AC3
DSP to consistently use 32-bit sample format in the encoder, these
functions were removed from the DSP function table, but the ARMv7
implementations were kept.
Signed-off-by: Geoff Hill <geoff@geoffhill.org>
Fixes a regression caused by the colorspace filter not using
the new API introduced by 8c7934f73a.
The scale filter uses it since 45e09a3041, and the setparams
filter since 3bf80df3cc.
Example:
ffprobe -f lavfi yuvtestsrc,setparams=color_primaries=bt470bg:color_trc=bt470bg:colorspace=bt470bg,colorspace=bt709:range=tv,scale,showinfo
Before:
color_range:unknown color_space:bt470bg ...
After:
color_range:tv color_space:bt709 ...
Signed-off-by: Nicolas Gaullier <nicolas.gaullier@cji.paris>
Signed-off-by: Niklas Haas <git@haasn.dev>
Also check for the number of streams and the AVCodecID generically
using FF_OFMT_FLAGs.
Reviewed-by: Stefano Sabatini <stefasab@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
When starting on a SEI recovery point close enough to the end of
the stream that draining starts before the recovery point's frame
is output, there can be non-recovered frames in the delayed picture
buffer that would currently cause the decoder to fail to output a
frame. This commit skips such frames and outputs the first recovered
frame, if one exists.
Signed-off-by: arch1t3cht <arch1t3cht@gmail.com>
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
When decoding starts at a SEI recovery point very shortly before the
end of the video stream, there can be frames which are decoded before
the recovery point's frame is output and which will only be output once
the draining has started. Previously, these frames would never be
marked as recovered. This commit copies the logic from
h264_select_output_frame
to send_next_delayed_frame to properly mark such frames as recovered.
Fixes ticket #10936.
Signed-off-by: arch1t3cht <arch1t3cht@gmail.com>
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
sps_log2_ctu_size_minus5 is between 0 and 2, with 3 reserved for future
use. The VVC decoder allows sps_log2_ctu_size_minus5 to be 3, and so
ctb_size_y should be at least 16 bits to prevent overflows. An
alternative patch would leave sps_log2_ctu_size_minus5 as 8 bits and
disallow sps_log2_ctu_size_minus5 = 3.
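For illustration:

    /* with the currently allowed value sps_log2_ctu_size_minus5 == 3: */
    ctb_size_y = 1 << (sps_log2_ctu_size_minus5 + 5); /* 1 << 8 == 256 */
    /* 256 does not fit into 8 bits (max 255) and wraps to 0;
       16 bits hold all possible values (32, 64, 128, 256) */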
Signed-off-by: Frank Plowman <post@frankplowman.com>
The first release of the CTS for AV1 decoding had incorrect
offsets for the OrderHints values.
The CTS will be fixed, and eventually, the drivers will be
updated to the proper spec-conforming behaviour, but we still
need to add a workaround as this will take months.
Only NVIDIA uses these values at all, so limit the workaround
to NVIDIA. Also, other vendors don't tend to provide accurate
CTS information.
This is needed by Vulkan. Constructing this can't be delegated to CBS
because packets might contain multiple frames (when non-shown frames are
present), but we need separate snapshots immediately before each frame
for the decoder.
reserve_index_space is a size, not an index.
Also refer to the variable in the description.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
When split-frame encoding is enabled, each input frame is partitioned
into horizontal strips which are encoded independently and
simultaneously by separate NVENCs, usually resulting in increased
encoding speed compared to single-NVENC encoding.
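Assuming the feature is exposed as an encoder option named
split_encode_mode (the option name and value here are assumptions,
not confirmed by this message), usage could look like:

    ffmpeg -i input.mp4 -c:v hevc_nvenc -split_encode_mode forced out.mp4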
Signed-off-by: Diego Felix de Souza <ddesouza@nvidia.com>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
As we can read in ST 2086:
Values outside the specified ranges of luminance and chromaticity values
are not reserved by SMPTE, and can be used for purposes outside the
scope of this standard.
This is further acknowledged by ITU-T H.264 and ITU-T H.265, which
say that values out of range are unknown, unspecified, or specified
by other means not specified in this Specification.
Signed-off-by: Kacper Michajłow <kasper93@gmail.com>
Signed-off-by: Niklas Haas <git@haasn.dev>