These functions used to be passed directly to pthread_create(), which
required them to return void*. This is no longer the case, so they can
return a plain int.
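
For illustration, the shape of the change (function and variable names
hypothetical):

    #include <stdint.h>

    /* Before: had to match pthread_create()'s void *(*)(void *)
     * signature, smuggling the error code through the return pointer. */
    static void *input_thread_old(void *arg)
    {
        int ret = 0;
        /* ... do the work ... */
        return (void *)(intptr_t)ret;
    }

    /* After: called through our own thread wrappers, so it can return
     * the error code as a plain int. */
    static int input_thread_new(void *arg)
    {
        /* ... do the work ... */
        return 0;
    }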
The current call stack looks like this:
* ifilter_bind_ist() (filter) calls ist_filter_add() (demuxer);
* ist_filter_add() opens the decoder, and then calls
dec_add_filter() (decoder);
* dec_add_filter() calls ifilter_parameters_from_dec() (i.e. back into
the filtering code) in order to give post-avcodec_open2() parameters
to the filter.
This is unnecessarily complicated. Pass the parameters as follows
instead:
* dec_init() (which opens the decoder) returns post-avcodec_open2()
parameters to its caller (i.e. the demuxer) in a parameter-only
AVFrame
* the demuxer passes these parameters to the filter in
  InputFilterOptions, together with other filter options (see the
  sketch below)
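
A minimal sketch of how such a parameter-only frame can be filled for
video, using only public API (the exact field set used by dec_init()
differs; dec_params_frame() is a hypothetical helper):

    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    /* Fill a parameter-only AVFrame (no data planes) with the
     * post-avcodec_open2() parameters of an opened decoder. */
    static AVFrame *dec_params_frame(const AVCodecContext *dec_ctx)
    {
        AVFrame *par = av_frame_alloc();
        if (!par)
            return NULL;

        par->format              = dec_ctx->pix_fmt;
        par->width               = dec_ctx->width;
        par->height              = dec_ctx->height;
        par->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
        par->colorspace          = dec_ctx->colorspace;
        par->color_range         = dec_ctx->color_range;

        return par;
    }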
Rename dec_open() to dec_init(), as it is more descriptive of its new
purpose.
Will be useful in following commits, which will add a new path for
opening decoders.
Previously, the demuxer would register the decoder with the scheduler,
using InputStream as the opaque pointer, and pass the scheduling index
to the decoder. Now the registration is done by the decoder itself,
using DecoderPriv as the opaque pointer, and the scheduling index is
returned to the demuxer from dec_open().
decoder_thread() then no longer needs to be accessed from outside of
ffmpeg_dec and can be made static.
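
Roughly, inside dec_open() (a sketch; sch_add_dec() follows the
scheduler API added earlier in this series, but treat the exact names,
signature, and the sch_idx field as illustrative):

    /* the decoder registers its own thread routine, with DecoderPriv
     * as the opaque pointer, and remembers the scheduling index */
    ret = sch_add_dec(sch, decoder_thread, dp, 0);
    if (ret < 0)
        return ret;
    dp->sch_idx = ret;
    /* dec_open() then returns this index to its caller, the demuxer */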
It is done based on demuxer information, so that is the more appropriate
place for this code.
This is a step towards decoupling Decoder and InputStream.
* as this decision is based on demuxing information, move it from the
decoder to the demuxer
* as the issue being addressed is latency added by frame threading, we
  only need to disable frame threading, not all threading (see the
  sketch below)
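
A minimal sketch of the narrower fix, assuming access to the decoder
context before it is opened (disable_frame_threading() is a
hypothetical helper):

    #include <libavcodec/avcodec.h>

    /* Clear only the frame-threading bit, keeping slice threading
     * available; forcing thread_count = 1 would disable all threading,
     * which is more than the latency issue requires. */
    static void disable_frame_threading(AVCodecContext *dec_ctx)
    {
        dec_ctx->thread_type &= ~FF_THREAD_FRAME;
    }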
Similar to what is currently done for other components, e.g. (de)muxers.
There is nothing in the public part currently, but that will change in
future commits.
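
The usual shape of such a split, as a sketch (field names
illustrative):

    #include <libavutil/avutil.h>

    typedef struct Decoder {
        enum AVMediaType type;   /* example public field */
    } Decoder;

    typedef struct DecoderPriv {
        Decoder dec;             /* public part; must stay first */
        /* private fields, visible only inside ffmpeg_dec.c */
        int sch_idx;
    } DecoderPriv;

    /* recover the private struct from a public pointer */
    static DecoderPriv *dp_from_dec(Decoder *d)
    {
        return (DecoderPriv *)d;
    }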
Change the main loop and every component (demuxers, decoders, filters,
encoders, muxers) to use the previously added transcode scheduler. Every
instance of every such component was already running in a separate
thread, but now they can actually run in parallel.
Changes the results of ffmpeg-fix_sub_duration_heartbeat to be more
correct and deterministic (tested by JEEB).
See the comment block at the top of fftools/ffmpeg_sched.h for more
details on what this scheduler is for.
This commit adds the scheduling code itself, along with minimal
integration with the rest of the program:
* allocating and freeing the scheduler
* passing it throughout the call stack in order to register the
individual components (demuxers/decoders/filtergraphs/encoders/muxers)
with the scheduler
The scheduler is not actually used as of this commit, so it should not
result in any change in behavior. That will change in future commits.
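
The integration, in rough outline (sch_alloc()/sch_free() match
ffmpeg_sched; the registration call shown stands in for the
per-component sch_add_*() functions, and its exact signature is an
assumption):

    Scheduler *sch = sch_alloc();
    if (!sch)
        return AVERROR(ENOMEM);

    /* the Scheduler pointer is passed down the call stack so each
     * component can register itself, along the lines of: */
    ret = sch_add_demux(sch, input_thread, demuxer);

    /* and at program end: */
    sch_free(&sch);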
As previously for decoding, this is merely "scaffolding" for moving to a
fully threaded architecture and does not yet make filtering truly
parallel - the main thread will currently wait for the filtering thread
to finish its work before continuing. That will change in future commits
after encoders are also moved to threads and a thread-aware scheduler is
added.
The current code tracks min/max pts for each stream separately; then
when the file ends, it combines them with the last frame's duration to
compute the total duration of each stream; finally, it selects the
longest of those durations as the file duration.
This is incorrect - the total file duration is the largest timestamp
difference between any frames, regardless of the stream.
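
A self-contained sketch of the corrected computation (types and names
hypothetical; timestamps assumed to share a time base):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct FrameInfo {
        int64_t ts;        /* frame timestamp */
        int64_t duration;  /* frame duration  */
    } FrameInfo;

    /* total duration = largest timestamp difference between any two
     * frames, regardless of which stream they belong to */
    static int64_t total_duration(const FrameInfo *frames, size_t nb)
    {
        int64_t start = INT64_MAX, end = INT64_MIN;

        for (size_t i = 0; i < nb; i++) {
            if (frames[i].ts < start)
                start = frames[i].ts;
            if (frames[i].ts + frames[i].duration > end)
                end = frames[i].ts + frames[i].duration;
        }

        return nb ? end - start : 0;
    }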
Also change the way the last-frame information is reported from
decoders to the muxer - previously it was just the last frame's
duration; now the end timestamp is sent, which is simpler.
Changes the result of the fate-ffmpeg-streamloop-transcode-av test,
where the timestamps are shifted slightly forward. Note that the
matroska demuxer does not return the first audio packet after seeking
(due to buggy interaction between the generic code and the demuxer), so
there is a gap in audio.
In this case, any timestamps are guessed by compute_pkt_fields() in
libavformat. Since we are decoding the stream, we have more accurate
information from the decoder and do not need any guesses.
Eliminates spurious PTS gaps in a number of FATE tests.
Also avoids dropping the majority of frames in fate-dirac*.
It is badly named (should have been -top_field_first, or at least -tff),
underdocumented and underspecified, and (most importantly) entirely
redundant with the setfield filter.
It now contains data from multiple sources, so group those items that
always come from the decoder. Also, initialize them to invalid values,
so that frames that did not originate from a decoder can be
distinguished.
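
A sketch of the resulting layout (field set illustrative): the
decoder-sourced items live in their own sub-struct and start out
invalid, so frames that did not pass through a decoder are easy to
tell apart:

    #include <stdint.h>
    #include <libavutil/avutil.h>
    #include <libavutil/rational.h>

    typedef struct FrameData {
        /* members always set by the decoder */
        struct {
            uint64_t   frame_num;
            int64_t    pts;
            AVRational tb;
        } dec;
        /* members that may come from other sources follow */
    } FrameData;

    static void frame_data_init(FrameData *fd)
    {
        fd->dec.frame_num = UINT64_MAX;
        fd->dec.pts       = AV_NOPTS_VALUE;
        fd->dec.tb        = (AVRational){ 0, 0 };
    }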