This allows using WRAPPED_AVFRAME encoders with loopback decoders in
order to connect multiple filtergraphs together.
Clear the flag in muxers, since lavf does not need it for anything and
otherwise it would change the results of framecrc FATE tests.
The encoder timebase is equal to the frame timebase, so it does not
need to be passed separately.
Also, rename in_picture to frame, which is shorter and more accurate:
it always contains a frame, never a field.
These functions used to be passed directly to pthread_create(), which
required them to return void*. This is no longer the case, so they can
return a plain int.
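A minimal sketch of the difference, with hypothetical function names
standing in for the actual fftools thread routines:

    #include <stdint.h>

    /* hypothetical worker body, standing in for the real per-thread loop */
    static int run_worker(void *arg)
    {
        (void)arg;
        return 0;
    }

    /* before: handed straight to pthread_create(), so it had to match the
     * void *(*)(void *) signature and smuggle the int result through the
     * returned pointer */
    static void *worker_thread_old(void *arg)
    {
        return (void *)(intptr_t)run_worker(arg);
    }

    /* after: a wrapper owns the pthread_create() call, so the routine can
     * return a plain int */
    static int worker_thread(void *arg)
    {
        return run_worker(arg);
    }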
The current callstack looks like this:
* ifilter_bind_ist() (filter) calls ist_filter_add() (demuxer);
* ist_filter_add() opens the decoder, and then calls
dec_add_filter() (decoder);
* dec_add_filter() calls ifilter_parameters_from_dec() (i.e. back into
the filtering code) in order to give post-avcodec_open2() parameters
to the filter.
This is unnecessarily complicated. Pass the parameters as follows
instead:
* dec_init() (which opens the decoder) returns post-avcodec_open2()
parameters to its caller (i.e. the demuxer) in a parameter-only
AVFrame;
* the demuxer passes these parameters to the filter in
InputFilterOptions, together with other filter options.
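As a rough illustration of the first point above, the parameter-only
AVFrame could be filled from the opened decoder context along these
lines (the helper name is made up; only public libavcodec/libavutil
fields are used):

    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    /* Copy post-avcodec_open2() parameters into a frame that carries no
     * data buffers; the caller can then hand it to the filtering code
     * instead of having the decoder call back into it. */
    static void dec_parameters_to_frame(const AVCodecContext *dec, AVFrame *par)
    {
        par->format              = dec->pix_fmt;
        par->width               = dec->width;
        par->height              = dec->height;
        par->sample_aspect_ratio = dec->sample_aspect_ratio;
        par->colorspace          = dec->colorspace;
        par->color_range         = dec->color_range;
        /* deliberately no av_frame_get_buffer(): parameters only */
    }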
The first of these binds inputs of complex filtergraphs to demuxer
streams (with a misleading comment claiming it *creates* complex
filtergraphs).
The second ensures that all filtergraph outputs are connected to an
encoder.
Merge them into a single function, which simplifies the ffmpeg_filter
API, is shorter, and will also be useful in following commits.
Also, rename the misleadingly named init_input_filter() to
fg_complex_bind_input().
Rename dec_open() to dec_init(), as that is more descriptive of its
new purpose.
Will be useful in following commits, which will add a new path for
opening decoders.
Treat it analogously to stream parameters like format/dimensions/etc.
This is functionally different from the previous code in two ways:
* for non-CFR video, the frame timebase (set by the decoder) is used
rather than the demuxer timebase;
* for sub2video, AV_TIME_BASE_Q is used, which is hardcoded by the
subtitle decoding API.
These changes should avoid unnecessary and potentially lossy timestamp
conversions from the decoder timebase into the demuxer one.
Changes the timebases used in sub2video tests.
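For illustration (the numbers are made up), rescaling a timestamp from
a fine decoder timebase into a coarser demuxer timebase and back does
not round-trip exactly, which is the kind of lossy conversion being
avoided:

    #include <stdint.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    int main(void)
    {
        AVRational dec_tb = (AVRational){ 1, 90000 }; /* frame/decoder timebase */
        AVRational mux_tb = (AVRational){ 1, 1000 };  /* demuxer timebase */

        int64_t pts    = 3003;                                 /* in dec_tb */
        int64_t in_mux = av_rescale_q(pts, dec_tb, mux_tb);    /* 33 (rounded) */
        int64_t back   = av_rescale_q(in_mux, mux_tb, dec_tb); /* 2970 != 3003 */

        return back == pts; /* 0 here: the round trip lost precision */
    }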
Apparently it can happen that avfilter_graph_request_oldest() returns
EAGAIN, yet av_buffersrc_get_nb_failed_requests() returns 0 for every
input.
Works around #10795, though the root issue is most likely in the
scale2ref filter.
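A sketch of the affected pattern; the fallback shown here is just one
possible way to handle the inconsistent state, not necessarily what
ffmpeg does:

    #include <libavfilter/avfilter.h>
    #include <libavfilter/buffersrc.h>
    #include <libavutil/error.h>

    /* Called after avfilter_graph_request_oldest() returned EAGAIN: pick
     * the buffersrc to feed next.  Normally at least one input reports
     * failed requests; the fallback covers the case from #10795 where
     * none of them does. */
    static int choose_starved_input(AVFilterContext **srcs, int nb_srcs)
    {
        for (int i = 0; i < nb_srcs; i++)
            if (av_buffersrc_get_nb_failed_requests(srcs[i]) > 0)
                return i;

        /* EAGAIN, yet no input admits to being starved: fall back to the
         * first input instead of treating this as unreachable */
        return nb_srcs > 0 ? 0 : AVERROR(EINVAL);
    }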
Components and pieces are side-data-specific fields, and there is a
variable number of them.
They also need to be identified in some form, so print a type too.
Reviewed-by: Stefano Sabatini <stefasab@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
It is only used by ffprobe (once) and ffplay (twice);
inlining it avoids unnecessarily including it in ffmpeg.
Reviewed-by: Stefano Sabatini <stefasab@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>