Broken after 7753a9d627. Apply only the
whitelist early, and the rest with a single call to av_opt_set_dict2()
with AV_OPT_SEARCH_CHILDREN, which should be equivalent to the original
behaviour.
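A minimal sketch of the intended order of operations (the helper name and
error handling here are illustrative, not the actual lavc code):

    #include "libavcodec/avcodec.h"
    #include "libavutil/dict.h"
    #include "libavutil/opt.h"

    /* Sketch only: apply the whitelist entry first, then everything else
     * (including private/child options) in a single call. */
    static int apply_options(AVCodecContext *avctx, AVDictionary **options)
    {
        const AVDictionaryEntry *e = av_dict_get(*options, "codec_whitelist", NULL, 0);
        int ret;

        if (e) {
            ret = av_opt_set(avctx, "codec_whitelist", e->value, 0);
            if (ret < 0)
                return ret;
        }
        /* AV_OPT_SEARCH_CHILDREN also reaches the private codec context,
         * which should match the original behaviour. */
        return av_opt_set_dict2(avctx, options, AV_OPT_SEARCH_CHILDREN);
    }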
Reported-by: Cameron Gutman <aicommander@gmail.com>
This replaces the myriad of existing lists in AVCodec by a unified API
call, allowing us to (ultimately) trim down the sizeof(AVCodec) quite
substantially, while also making this more trivially extensible.
In addition to the already covered lists, add two new entries for color
space and color range, mirroring the newly added negotiable fields in
libavfilter.
Once the deprecation period passes for the existing public fields, the
rough plan is to move the commonly used fields (such as
pix_fmt/sample_fmt) into FFCodec, possibly as a union of audio and video
configuration types, and then implement the rarely used fields with
custom callbacks.
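As a rough illustration of the direction this is heading in (the query
function and the AV_CODEC_CONFIG_* value used below are assumptions for the
sketch, not a settled interface):

    #include <stdio.h>
    #include "libavcodec/avcodec.h"
    #include "libavutil/pixdesc.h"

    /* Sketch: instead of reading a per-type list such as codec->pix_fmts,
     * callers ask one query call for whichever configuration they need. */
    static int print_supported_pix_fmts(const AVCodecContext *avctx, const AVCodec *codec)
    {
        const enum AVPixelFormat *fmts; /* assumed to be what the query returns */
        int nb = 0, ret;

        /* Hypothetical unified call; one enum value per list that used to be
         * a separate AVCodec field. */
        ret = avcodec_get_supported_config(avctx, codec, AV_CODEC_CONFIG_PIX_FORMAT,
                                           0, (const void **)&fmts, &nb);
        if (ret < 0)
            return ret;
        for (int i = 0; i < nb; i++)
            printf("%s\n", av_get_pix_fmt_name(fmts[i]));
        return 0;
    }

Color space and color range would then just be two more such configuration
values rather than two more AVCodec fields.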
This ensures that if a codec isn't on codec_whitelist, trying to open it
will not trigger ff_codec_close(), which could invalidate useful
information still present in the context.
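Conceptually the check now happens before any state that would need cleanup
exists, along these lines (a sketch, not the exact code):

    #include "libavcodec/avcodec.h"
    #include "libavutil/avstring.h"
    #include "libavutil/error.h"
    #include "libavutil/log.h"

    /* Sketch: reject a non-whitelisted codec up front, so nothing has been
     * initialized yet and ff_codec_close() never needs to run. */
    static int check_whitelist(AVCodecContext *avctx, const AVCodec *codec)
    {
        if (avctx->codec_whitelist &&
            av_match_list(codec->name, avctx->codec_whitelist, ',') <= 0) {
            av_log(avctx, AV_LOG_ERROR, "Codec (%s) not on whitelist '%s'\n",
                   codec->name, avctx->codec_whitelist);
            return AVERROR(EINVAL);
        }
        return 0;
    }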
Signed-off-by: Dale Curtis <dalecurtis@chromium.org>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Reorganize the code such that the frame threading code does not call the
decoders directly, but instead calls back into the generic decoding
code. This avoids duplicating the logic that wraps the decoder
invocation and allows receive_frame()-based decoders to use frame
threading.
Further work by Timo Rothenpieler <timo@rothenpieler.org>.
Frame-threaded decoders with inter-frame dependencies
use the ThreadFrame API for syncing. It works as follows:
During init each thread allocates an AVFrame for every
ThreadFrame.
Thread A reads the header of its packet and allocates
a buffer for an AVFrame with ff_thread_get_ext_buffer()
(which also allocates a small structure that is shared
with other references to this frame) and sets its fields,
including side data. Then said thread calls ff_thread_finish_setup().
From that moment onward it is no longer allowed to change any
of the AVFrame fields, but it may change
fields which are an indirection away, like the content of
AVFrame.data or already existing side data.
After thread A has called ff_thread_finish_setup(),
another thread (the user one) calls the codec's update_thread_context
callback which in turn calls ff_thread_ref_frame() which
calls av_frame_ref() which reads every field of A's
AVFrame; hence the above restriction on modifications
of the AVFrame (as any modification of the AVFrame by A after
ff_thread_finish_setup() would be a data race). Of course,
this av_frame_ref() also incurs allocations and therefore
needs to be checked. ff_thread_ref_frame() also references
the small structure used for communicating progress.
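In code, the sequence described above looks roughly like this (a condensed
sketch, not taken from any particular decoder; cur_frame.f is assumed to have
been allocated during init as described):

    #include <limits.h>
    #include "libavcodec/avcodec.h"
    #include "libavcodec/thread.h"
    #include "libavcodec/threadframe.h"

    typedef struct SketchContext {   /* hypothetical decoder context */
        ThreadFrame cur_frame;
    } SketchContext;

    /* Thread A, for one packet: */
    static int decode_one(AVCodecContext *avctx, SketchContext *s)
    {
        int ret = ff_thread_get_ext_buffer(avctx, &s->cur_frame, AV_GET_BUFFER_FLAG_REF);
        if (ret < 0)
            return ret;
        /* ... set AVFrame fields and side data of s->cur_frame.f here ... */
        ff_thread_finish_setup(avctx);  /* no AVFrame field may change after this */
        /* ... decode; only data behind an indirection (pixels, contents of
         * already attached side data) may still be written; progress is
         * reported for consumers as data becomes available: */
        ff_thread_report_progress(&s->cur_frame, INT_MAX, 0); /* whole frame done */
        return 0;
    }

    /* The update_thread_context() callback of the next context then does,
     * per reference frame: */
    static int update_refs(SketchContext *dst, const SketchContext *src)
    {
        return ff_thread_ref_frame(&dst->cur_frame, &src->cur_frame); /* av_frame_ref() inside */
    }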
This av_frame_ref() makes it awkward to propagate values that
only become known during decoding to later threads (in case of
frame reordering or other mechanisms of delayed output (like
show-existing-frames) it's not the decoding thread, but a later
thread that returns the AVFrame). E.g. for VP9 when exporting video
encoding parameters as side data, the number of blocks only
becomes known during decoding, so one can't allocate the side data
before ff_thread_finish_setup(). It is currently being done afterwards
and this leads to a data race in the vp9-encparams test when using
frame-threading. Returning decode_error_flags is also complicated
by this.
To perform this exchange a buffer shared between the references
is needed (notice that simply giving the later threads a pointer
to the original AVFrame does not work, because said AVFrame will
be reused later on when thread A decodes the next packet given to it).
One could extend the buffer already used for progress for this
or use a new one (requiring yet another allocation), yet both
of these approaches have the drawback of being unnatural, ugly
and requiring quite a lot of ad-hoc code. E.g. in case of the VP9
side data mentioned above one could not simply use the helper
that allocates and adds the side data to an AVFrame in one go.
The ProgressFrame API meanwhile offers a different solution to all
of this. It is based around the idea that the most natural
shared object for sharing information about an AVFrame between
decoding threads is the AVFrame itself. To actually implement this
the AVFrame needs to be reference counted. This is achieved by
putting a (ownership) pointer into a shared (and opaque) structure
that is managed by the RefStruct API and which also contains
the stuff necessary for progress reporting.
The users get a pointer to this AVFrame with the understanding
that the owner may set all the fields until it has indicated
that it has finished decoding this AVFrame; then the users are
allowed to read everything. Every decoder may of course employ
a different contract than the one outlined above.
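What the users see is essentially just this pair (a sketch; the field names
are illustrative and the shared, RefStruct-managed structure behind the
second pointer stays opaque):

    /* Sketch of the layout implied above. */
    typedef struct ProgressFrame {
        struct AVFrame          *f;        /* owned by the shared structure   */
        struct ProgressInternal *progress; /* refcounted: progress counter,
                                              mutex and condition variable    */
    } ProgressFrame;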
Given that there is no underlying av_frame_ref(), creating
references to a ProgressFrame can't fail. Only
ff_thread_progress_get_buffer() can fail, but given that
it will replace calls to ff_thread_get_ext_buffer(), it is called
at places where errors are already expected and properly
taken care of.
The ProgressFrames are empty (i.e. the AVFrame pointer is NULL
and the AVFrames are not allocated during init at all)
while not being in use; ff_thread_progress_get_buffer() both
sets up the actual ProgressFrame and already calls
ff_thread_get_buffer(). So instead of checking for
ThreadFrame.f->data[0] or ThreadFrame.f->buf[0] being NULL
for "this reference frame is non-existing" one should check for
ProgressFrame.f.
This also implies that one can only set AVFrame properties
after having allocated the buffer. This restriction is not deep:
if it becomes onerous for any codec, ff_thread_progress_get_buffer()
can be broken up. The user would then have to get a buffer
himself.
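The existence check mentioned above thus changes roughly as follows
(s->tf and s->pf are illustrative reference slots in a decoder context):

    /* ThreadFrame style: the AVFrame always exists; emptiness means "no buffer". */
    if (!s->tf.f->buf[0])
        return 0;              /* no such reference frame */

    /* ProgressFrame style: the AVFrame pointer itself is the indicator. */
    if (!s->pf.f)
        return 0;              /* no such reference frame */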
In order to avoid unnecessary allocations, the shared structure
is pooled, so that both the structure as well as the AVFrame
itself are reused. This means that there won't be lots of
unnecessary allocations in case of non-frame-threaded decoding.
It might even turn out to need fewer allocations than the current code
(the current code allocates AVFrames for every DPB slot, but
these are often excessively large and not completely used;
the new code allocates them on demand). Pooling relies on the
reset function of the RefStruct pool API, it would be impossible
to implement with the AVBufferPool API.
Finally, ProgressFrames have no notion of owner; they are built
on top of the ThreadProgress API which also lacks such a concept.
Instead every ThreadProgress and every ProgressFrame contains
its own mutex and condition variable, making it completely independent
of pthread_frame.c. Just like the ThreadFrame API it is simply
presumed that only the actual owner/producer of a frame reports
progress on said frame.
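Conceptually, each such progress object is self-sufficient, roughly along
these lines (an illustrative sketch, not the actual struct layout or
function names):

    #include <pthread.h>
    #include <stdatomic.h>

    typedef struct SketchProgress {
        atomic_int      progress; /* highest unit (e.g. row) reported done */
        pthread_mutex_t mutex;    /* owned by this object ...              */
        pthread_cond_t  cond;     /* ... independent of pthread_frame.c    */
    } SketchProgress;

    /* Only the producer of the frame calls this. */
    static void sketch_report(SketchProgress *p, int n)
    {
        pthread_mutex_lock(&p->mutex);
        atomic_store_explicit(&p->progress, n, memory_order_release);
        pthread_cond_broadcast(&p->cond);
        pthread_mutex_unlock(&p->mutex);
    }

    /* Consumers wait until at least n units have been reported. */
    static void sketch_await(SketchProgress *p, int n)
    {
        if (atomic_load_explicit(&p->progress, memory_order_acquire) >= n)
            return;
        pthread_mutex_lock(&p->mutex);
        while (atomic_load_explicit(&p->progress, memory_order_acquire) < n)
            pthread_cond_wait(&p->cond, &p->mutex);
        pthread_mutex_unlock(&p->mutex);
    }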
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The avctx passed to avcodec_string() may have unset coded dimensions, as is the
case when called by av_dump_format() where the streams had all the needed
information at the container level, and as such no frames were decoded
internally.
Signed-off-by: James Almer <jamrial@gmail.com>
Avoids allocations and frees and error checks for said allocations;
also avoids a few indirections and casts.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Instead, use forward declarations; and in order not to affect
any users, include these headers for them, but not internally.
This has the advantage of removing implicit inclusions of these
headers from almost all files providing codecs.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This decoding flag makes decoders drop all frames after a parameter
change, but what exactly constitutes a parameter change is not well
defined and will typically depend on the exact use case.
This functionality therefore does not belong in libavcodec, but rather in
user code.
The goal is to distinguish between APIs provided by the generic layer to
individual codecs and APIs internal to the generic layer.
Start by moving ff_{decode,encode}_receive_frame() and
ff_{decode,encode}_preinit() into this new header, as those functions
are called from generic code and should not be visible to individual
codecs.
Frame counters can overflow relatively easily (INT_MAX frames amount to only
slightly more than 1 year of 60 fps content), so make sure we use 64 bit
values for them.
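For reference: (2^31 - 1) frames / 60 fps ≈ 35,791,394 seconds ≈ 414 days,
i.e. just over 13 months of continuous 60 fps content.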
Also deprecate the old 32 bit frame_number attribute.
Signed-off-by: Marton Balint <cus@passwd.hu>
The idea behind last_pkt_props was to store the properties of the last packet
fed to the decoder. Any sort of queueing required by CODEC_CAP_DELAY decoders
that consume several packets before they start outputting frames should be done
by the decoders in question. An example of this is libdav1d.
This is required for the following commits that will fix last_pkt_props in
frame threading scenarios, as well as maintain its contents during flush.
This reverts commit 022a12b306.
Signed-off-by: James Almer <jamrial@gmail.com>
This ensures that if AVCodecContext.channels or
AVCodecContext.channel_layout are set, AVCodecContext.ch_layout
has the equivalent values after this block.
(In case these values are set inconsistently, the consistency check
for ch_layout below will error out.)
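Simplified, the conversion amounts to something like this (a sketch; the
real block also has to coexist with the deprecation guards around the old
fields):

    #include "libavutil/channel_layout.h"

    /* Sketch: derive ch_layout from the deprecated fields if the caller only
     * set those; the consistency check below then operates on ch_layout. */
    if (!av_channel_layout_check(&avctx->ch_layout)) {
        if (avctx->channel_layout) {
            av_channel_layout_from_mask(&avctx->ch_layout, avctx->channel_layout);
        } else if (avctx->channels > 0) {
            avctx->ch_layout.order       = AV_CHANNEL_ORDER_UNSPEC;
            avctx->ch_layout.nb_channels = avctx->channels;
        }
    }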
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, check the provided channel layout for encoders
without AVCodec.ch_layouts set. This fixes an infinite loop
in the WavPack encoder (and maybe other issues in other encoders
as well) in case the channel count is zero.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This block was scheduled for removal, which means that no validation would have
taken place after the old API was removed.
It was also going to mistakenly remove an unrelated bits_per_coded_sample
check.
Signed-off-by: James Almer <jamrial@gmail.com>
At this point, active_thread_type is set iff it is set to FF_THREAD_FRAME,
which in turn is the case iff AVCodecInternal.frame_thread_encoder
is set.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Handling this in general code makes more sense than handling it in
individual codec files, because it would be a lot of unnecessary code
duplication for the plenty of formats that support exporting ICC
profiles (jpg, png, tiff, webp, jxl, ...).
encode.c and decode.c will be in charge of initializing this state as
needed, so we merely need to make sure to uninit it afterwards from the
common destructor path.
Signed-off-by: Niklas Haas <git@haasn.dev>
The number of padding samples reported by containers takes into account the
extended samplerate in HE-AAC.
Fixes ticket #9671.
Signed-off-by: James Almer <jamrial@gmail.com>
and remove FF_CODEC_CAP_INIT_THREADSAFE
All our native codecs are already init-threadsafe
(only wrappers for external libraries and hwaccels
are typically not marked as init-threadsafe yet),
so it is only natural for this to also be the default state.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The general decoding API uses bitstream filters and an AVFifo
and therefore AVCodecInternal contains pointers to an AVBSFContext
and to an AVFifo and lavc/internal.h includes lavc/bsf.h and
lavu/fifo.h.
Yet actually, only two files are supposed to use these, namely
avcodec.c and (mainly) decode.c. For all the other files,
it should be an opaque type that they should not touch and that
they need not know anything about. This can be achieved by not
including these headers and using the structs instead of the
corresponding typedefs.
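The mechanism is ordinary C opaque types, roughly (member names here are
illustrative):

    /* In internal.h: only the struct tags are needed, no bsf.h/fifo.h include. */
    struct AVBSFContext;
    struct AVFifo;

    typedef struct AVCodecInternal {
        /* ... */
        struct AVBSFContext *bsf;       /* only decode.c/avcodec.c look inside */
        struct AVFifo       *pkt_props; /* opaque to every other file          */
        /* ... */
    } AVCodecInternal;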
This also forces translation units that really use the BSF
and the FIFO APIs themselves to include the relevant headers
directly instead of relying on indirect inclusions (up until now,
even avcodec.c and decode.c relied on fifo.h to be included
by internal.h).
Of course, it also avoids unnecessary rebuilds when bsf.h or fifo.h
change.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This avoids having to rebuild big files every time FFMPEG_VERSION
changes (which it does with every commit).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It works, but it is not documented to work.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Up until now, codec.h contains both public and private parts
of AVCodec. This exposes the internals of AVCodec to users
and leads them into the temptation of actually using them
and forces us to forward-declare structures and types that
users can't use at all.
This commit changes this by adding a new structure FFCodec to
codec_internal.h that extends AVCodec, i.e. contains the public
AVCodec as first member; the private fields of AVCodec are moved
to this structure, leaving codec.h clean.
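The extension pattern itself is plain struct embedding, roughly (a simplified
sketch, not the full definition):

    #include "libavcodec/codec.h"

    typedef struct FFCodec {
        /* The public AVCodec; placed first so that an FFCodec pointer can be
         * handed out wherever an AVCodec pointer is expected. */
        AVCodec p;

        /* formerly private AVCodec fields (init/decode callbacks,
         * internal capabilities, ...) live here */
    } FFCodec;

    /* Getting from the public struct back to the private one is a cast: */
    static inline const FFCodec *ffcodec(const AVCodec *codec)
    {
        return (const FFCodec *)codec;
    }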
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also move FF_CODEC_TAGS_END as well as struct AVCodecDefault.
This reduces the amount of files that have to include internal.h
(which comes with quite a lot of indirect inclusions), as e.g.
most encoders don't need it. It is furthermore in preparation
for moving the private part of AVCodec out of the public codec.h.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>