Change the main loop and every component (demuxers, decoders, filters,
encoders, muxers) to use the previously added transcode scheduler. Every
instance of every such component was already running in a separate
thread, but now they can actually run in parallel.
Changes the results of the ffmpeg-fix_sub_duration_heartbeat test -
tested by JEEB to be more correct and deterministic.
See the comment block at the top of fftools/ffmpeg_sched.h for more
details on what this scheduler is for.
This commit adds the scheduling code itself, along with minimal
integration with the rest of the program:
* allocating and freeing the scheduler
* passing it throughout the call stack in order to register the
individual components (demuxers/decoders/filtergraphs/encoders/muxers)
with the scheduler
The scheduler is not actually used as of this commit, so it should not
result in any change in behavior. That will change in future commits.
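For illustration, the registration added here could look roughly like
the sketch below; the Scheduler type and the sch_* names and signatures
are assumptions for this sketch, not necessarily the actual
fftools/ffmpeg_sched.h API.

    /* Hypothetical sketch of scheduler setup; the sch_* names and
     * signatures are assumptions, not the actual fftools API. */
    Scheduler *sch = sch_alloc();
    if (!sch)
        return AVERROR(ENOMEM);

    /* each component registers its thread function and context, and
     * receives an index used for later data exchange */
    int demux_idx = sch_add_demux(sch, demux_thread, demuxer);
    int enc_idx   = sch_add_enc(sch, enc_thread, encoder);
    int mux_idx   = sch_add_mux(sch, mux_thread, muxer);

    /* connections and the transcode loop would follow here */

    sch_free(&sch);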
As with the analogous decoding change, this is only a preparatory step
toward a fully threaded architecture and does not yet make encoding truly
parallel. The main thread will currently submit a frame and wait until
it has been fully processed by the encoder before moving on. That will
change in future commits after filters are moved to threads and a
thread-aware scheduler is added.
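The current handoff is conceptually similar to the generic pthread
sketch below; this illustrates the submit-and-wait pattern only and is
not the actual ffmpeg code (encode() is a stand-in).

    #include <pthread.h>
    #include <libavutil/frame.h>

    /* Generic illustration of a submit-and-wait frame handoff. */
    typedef struct Handoff {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        AVFrame        *frame; /* non-NULL: work pending */
        int             done;
    } Handoff;

    static void encode(AVFrame *frame) { (void)frame; /* stand-in */ }

    /* main thread: submit one frame, then block until the encoder
     * thread has fully processed it - no parallelism yet */
    static void submit_and_wait(Handoff *h, AVFrame *frame)
    {
        pthread_mutex_lock(&h->lock);
        h->frame = frame;
        h->done  = 0;
        pthread_cond_signal(&h->cond);
        while (!h->done)
            pthread_cond_wait(&h->cond, &h->lock);
        pthread_mutex_unlock(&h->lock);
    }

    /* encoder thread: wait for a frame, process it, wake the caller */
    static void *enc_worker(void *arg)
    {
        Handoff *h = arg;
        pthread_mutex_lock(&h->lock);
        while (!h->frame)
            pthread_cond_wait(&h->cond, &h->lock);
        encode(h->frame);
        h->frame = NULL;
        h->done  = 1;
        pthread_cond_signal(&h->cond);
        pthread_mutex_unlock(&h->lock);
        return NULL;
    }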
This code suffers from a known issue - if an encoder with a sync queue
receives EOF it will terminate after processing everything it currently
has, even though the sync queue might still be triggered by other
threads. That will be fixed in following commits.
Its function is analogous to that of the fps filter, so filtering is a
more appropriate place for this.
The main practical reason for this move is that it places the encoding
sync queue right at the boundary between filters and encoders. This will
be important when switching to threaded scheduling, as the sync queue
involves multiple streams and will thus need to do nontrivial
inter-thread synchronization.
In addition to framerate conversion, the closely related
* encoder timebase selection
* applying the start_time offset
are also moved to filtering.
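For comparison, the fps filter mentioned above can already be used to
force a constant output rate (usage sketch; file names are
placeholders):

    ffmpeg -i in.mkv -vf fps=30 -c:v libx264 out.mkv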
That field is used by the framerate code to track whether any output has
been generated for the last input frame(*). Its use in the last
invocation of print_report() is meant to account for the very last
filtered frame being dropped in the number of dropped frames printed in
the log. However, that is a highly inappropriate place to do so, as it
makes assumptions about vsync logic in completely unrelated code. Move
the increment to encoder flush instead.
(*) the name is misleading, as the input frame has not yet been dropped
and may still be output in the future
Always use the functionality of the latter, which makes more sense as it
avoids losing keyframes due to vsync code dropping frames.
Deprecate the 'source_no_drop' value, as it is now redundant.
Unlike the 'source' mode, which preserves source keyframe-marking as-is,
the 'source_no_drop' mode attempts to keep track of keyframes dropped by
framerate conversion and mark the next output frame as key in such
cases. However,
* c75be06148 broke this functionality entirely, and made it equivalent
to 'source'
* even before it would only work when the frame immediately following
the dropped keyframe is preserved and not dropped as well
Also, drop the check for 'frame' when setting dropped_keyframe, as it
is redundant with the check on the preceding line.
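Typical usage of the remaining 'source' mode (a sketch; file names are
placeholders):

    # preserve source keyframe marking as-is
    ffmpeg -i in.mkv -c:v libx264 -force_key_frames source out.mkv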
It is badly named (should have been -top_field_first, or at least -tff),
underdocumented and underspecified, and (most importantly) entirely
redundant with the setfield filter.
Merge three blocks with slightly inconsistent checks into one, treating
encoder input as interlaced when any of the following holds:
* at least one of ilme/ildct flags is set
* the first frame in the stream is marked as interlaced
* the user specified the -top option
Stop modifying the frame passed to enc_open().
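The merged condition then amounts to roughly the sketch below; the
variable names are assumptions, while the flags are the real
libavcodec/libavutil ones.

    /* treat encoder input as interlaced when any of these holds */
    int interlaced =
        (enc_ctx->flags & (AV_CODEC_FLAG_INTERLACED_DCT |
                           AV_CODEC_FLAG_INTERLACED_ME))  ||
        (frame->flags & AV_FRAME_FLAG_INTERLACED)         ||
        top_field_first >= 0; /* the user passed -top */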
When no output video framerate is specified by the user with -r or can
be inferred from the filtergraph, encoder setup will arbitrarily decide
that the framerate is 25fps. However, making up any framerate value for
VFR encoding is at best unnecessary.
Changes the results of the sub2video tests, where the input timebase is
now used instead of 1/25.
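In practice (usage sketch; file names are placeholders):

    # no -r and no framerate known from the filtergraph:
    # timestamps are now passed through instead of being forced to 1/25
    ffmpeg -i in.mkv -c:v libx264 out.mkv

    # CFR output is still available explicitly
    ffmpeg -i in.mkv -r 30 -c:v libx264 out.mkv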
Mainly, this fixes the handling of the special values of -enc_time_base
('demux' or 'filter') for audio. It also prints a warning if
-enc_time_base is specified for subtitles, instead of silently ignoring
it (the current subtitle encoding API only works with AV_TIME_BASE_Q).
This function converts packet timestamps from the input stream timebase
to OutputStream.mux_timebase, which may or may not be equal to the
actual output AVStream timebase (and even when it is, this may not
always be the optimal choice due to bitstream filtering).
Just keep the timestamps in the input stream timebase; they will be
rescaled as needed before bitstream filtering and/or sending the packet
to the muxer.
Move the av_rescale_delta() call for audio (needed to preserve accuracy
with coarse demuxer timebases) to write_packet.
Drop now-unused OutputStream.mux_timebase.
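For audio this ends up roughly as in the sketch below; av_rescale_delta()
is the real libavutil API, while the surrounding names are assumptions.

    /* rescale an audio packet's dts from the demuxer timebase to the
     * muxer timebase, keeping sub-timebase accuracy by counting in
     * 1/sample_rate units; 'last' carries rounding state across calls,
     * 'nb_samples' is the packet duration in samples */
    pkt->dts = av_rescale_delta(in_tb, pkt->dts,
                                (AVRational){ 1, sample_rate },
                                nb_samples, &last, out_tb);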
frame is always != NULL for audio and video here
(this is checked by an assert, and the frame is already dereferenced
before it reaches this code).
Fixes Coverity issue #1538858.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Read the timebase from FrameData rather than the input stream. This
should fix #10393 and generally be more reliable.
Replace the use of '-1' to indicate the demuxing timebase with the
string 'demux'. Also allow requesting the filter timebase with
'-enc_time_base filter'.
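Usage sketch (file names are placeholders):

    # encode using the demuxer timebase
    ffmpeg -i in.mkv -enc_time_base demux out.mkv

    # encode using the filtergraph output timebase
    ffmpeg -i in.mkv -enc_time_base filter out.mkv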
It now contains data from multiple sources, so group those items that
always come from the decoder. Also, initialize them to invalid values,
so that frames that did not originate from a decoder can be
distinguished.
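A sketch of the resulting layout; the field names here are assumptions
based on the description above, not the actual fftools struct.

    typedef struct FrameData {
        /* fields that always come from the decoder; set to invalid
         * values for frames that did not originate from a decoder */
        struct {
            int64_t    pts; /* AV_NOPTS_VALUE if not from a decoder */
            AVRational tb;  /* {0, 0}         if not from a decoder */
        } dec;

        /* fields filled from other sources follow */
    } FrameData;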
This is possible now that enc_open() is always called with a non-NULL
frame for audio/video.
Previously the code would directly reach into the buffersink, which is a
layering violation.
When no frames are passed from a filtergraph to an encoder, but the
filtergraph has been configured (i.e. has output parameters), the
encoder flush code will use those parameters to initialize the encoder
in a last-ditch effort to produce some useful output.
Rework this process so that it is triggered by the filtergraph, which
now sends a dummy frame with parameters, but no data, to the encoder,
rather than the encoder reaching backwards into the filter.
This approach is more in line with the natural data flow from filters
to encoders and will allow reducing encoder-filter interactions in
following commits.
This code is tested by fate-adpcm-ima-cunning-trunc-t2-track1, which (as
confirmed by Zane) is supposed to produce empty output.
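A minimal sketch of constructing such a parameters-only frame, assuming
the filtergraph queries its own buffersink ('sink'); the buffersink
accessors are the real libavfilter API, the rest is an assumption.

    /* a frame carrying stream parameters but no data buffers
     * (frame->buf[0] stays NULL), used only to initialize the encoder */
    AVFrame *f = av_frame_alloc();
    if (!f)
        return AVERROR(ENOMEM);
    f->format              = av_buffersink_get_format(sink);
    f->width               = av_buffersink_get_w(sink);
    f->height              = av_buffersink_get_h(sink);
    f->sample_aspect_ratio = av_buffersink_get_sample_aspect_ratio(sink);
    f->time_base           = av_buffersink_get_time_base(sink);
    /* note: no av_frame_get_buffer() call - the frame remains empty */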