The function of framerate conversion is analogous to that of the fps
filter, so filtering is a more appropriate place for it.
The main practical reason for this move is that it places the encoding
sync queue right at the boundary between filters and encoders. This will
be important when switching to threaded scheduling, as the sync queue
involves multiple streams and will thus need to do nontrivial
inter-thread synchronization.
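
As a generic illustration of the kind of synchronization such a queue
needs once several threads feed it, here is a minimal producer/consumer
sketch in plain pthreads; this is not the actual fftools sync queue API,
all names are invented, and initialization/teardown are omitted:

    #include <pthread.h>
    #include <stdint.h>

    typedef struct SyncItem {
        int              stream_idx;
        int64_t          pts;    /* in some common timebase */
        struct SyncItem *next;
    } SyncItem;

    typedef struct SyncQueue {
        SyncItem       *head, *tail;
        pthread_mutex_t lock;
        pthread_cond_t  nonempty;
    } SyncQueue;

    /* called by per-stream producer threads */
    static void sq_push(SyncQueue *q, SyncItem *it)
    {
        it->next = NULL;
        pthread_mutex_lock(&q->lock);
        if (q->tail) q->tail->next = it;
        else         q->head       = it;
        q->tail = it;
        pthread_cond_signal(&q->nonempty);
        pthread_mutex_unlock(&q->lock);
    }

    /* called by the consumer side; blocks until an item arrives */
    static SyncItem *sq_pop(SyncQueue *q)
    {
        pthread_mutex_lock(&q->lock);
        while (!q->head)
            pthread_cond_wait(&q->nonempty, &q->lock);
        SyncItem *it = q->head;
        q->head = it->next;
        if (!q->head)
            q->tail = NULL;
        pthread_mutex_unlock(&q->lock);
        return it;
    }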
In addition to framerate conversion, the closely-related
* encoder timebase selection
* application of the start_time offset (sketched below)
are also moved to filtering.
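
A rough illustration of the second item, assuming the start_time offset
is kept in AV_TIME_BASE_Q (microseconds): on the filter side it can be
rescaled to the frame timebase and subtracted from each frame's pts
before the frame reaches the encoder. The helper and variable names
below are assumptions, not the actual fftools code:

    #include <libavutil/avutil.h>
    #include <libavutil/frame.h>
    #include <libavutil/mathematics.h>

    /* Shift a filtered frame so that output timestamps start at zero.
     * frame_tb is the timebase of frame->pts (typically the filtergraph
     * output timebase, which can also serve as the encoder timebase). */
    static void apply_start_time(AVFrame *frame, AVRational frame_tb,
                                 int64_t start_time_us)
    {
        if (frame->pts != AV_NOPTS_VALUE)
            frame->pts -= av_rescale_q(start_time_us, AV_TIME_BASE_Q, frame_tb);
    }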
Always apply the functionality of 'source_no_drop' when
'-force_key_frames source' is used; this makes more sense, as it avoids
losing keyframes due to the vsync code dropping frames.
Deprecate the 'source_no_drop' value, as it is now redundant.
ifilter_send_eof() will fail if the input has no real or fallback
parameters, so there is no need to handle the case of some inputs being
in EOF state yet having no parameters.
The -top option is badly named (it should have been -top_field_first,
or at least -tff), underdocumented and underspecified, and (most
importantly) entirely redundant with the setfield filter.
This function converts packet timestamps from the input stream timebase
to OutputStream.mux_timebase, which may or may not be equal to the
actual output AVStream timebase (and even when it is, this may not
always be the optimal choice due to bitstream filtering).
Just keep the timestamps in the input stream timebase; they will be
rescaled as needed before bitstream filtering and/or sending the packet
to the muxer.
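
The rescaling then happens in one place, roughly as in this sketch
using the public API (not the actual muxer code; names are
illustrative):

    #include <libavcodec/packet.h>
    #include <libavutil/rational.h>

    /* Convert pts/dts/duration from the timebase the packet has carried
     * so far (the input stream timebase) to the destination timebase,
     * i.e. the bitstream filter input or the muxer's AVStream.time_base. */
    static void rescale_before_output(AVPacket *pkt,
                                      AVRational src_tb, AVRational dst_tb)
    {
        av_packet_rescale_ts(pkt, src_tb, dst_tb);
    }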
Move the av_rescale_delta() call for audio (needed to preserve accuracy
with coarse demuxer timebases) to write_packet.
Drop now-unused OutputStream.mux_timebase.
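
For reference, a sketch of how av_rescale_delta() preserves sample
accuracy when the source timebase is coarse (e.g. 1/1000 from some
demuxers); the function and parameter names are illustrative, and the
'last' state must persist across calls for a given stream:

    #include <stdint.h>
    #include <libavutil/mathematics.h>

    /* Rescale an audio pts from in_tb to out_tb while tracking the
     * rounding error in a sample-accurate timebase (1/sample_rate) via
     * *last, so the error does not accumulate across packets.
     * *last must be initialized to AV_NOPTS_VALUE before the first call. */
    static int64_t audio_pts_to_out_tb(int64_t pts, int nb_samples,
                                       int sample_rate,
                                       AVRational in_tb, AVRational out_tb,
                                       int64_t *last)
    {
        return av_rescale_delta(in_tb, pts,
                                (AVRational){ 1, sample_rate }, nb_samples,
                                last, out_tb);
    }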
Read the timebase from FrameData rather than the input stream. This
should fix #10393 and generally be more reliable.
Replace the use of '-1' to indicate the demuxing timebase with the
string 'demux'. Also allow requesting the filter timebase with
'-enc_time_base filter'.
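
An illustrative way to parse the new values (a hypothetical helper, not
the actual option-handling code; the sentinel constants are
assumptions):

    #include <limits.h>
    #include <string.h>
    #include <libavutil/parseutils.h>
    #include <libavutil/rational.h>

    enum { TB_FROM_DEMUXER = -1, TB_FROM_FILTER = -2 };

    /* Map an -enc_time_base argument to either a special request
     * (demuxer or filter timebase) or an explicit rational timebase. */
    static int parse_enc_time_base(const char *arg, AVRational *tb, int *special)
    {
        *special = 0;
        if (!strcmp(arg, "demux")) {
            *special = TB_FROM_DEMUXER;
            return 0;
        }
        if (!strcmp(arg, "filter")) {
            *special = TB_FROM_FILTER;
            return 0;
        }
        return av_parse_ratio(tb, arg, INT_MAX, 0, NULL);
    }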
FrameData now contains data from multiple sources, so group the items
that always come from the decoder. Also, initialize them to invalid
values, so that frames that did not originate from a decoder can be
distinguished.
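
A sketch of the resulting layout; the field names and sentinel values
here are assumptions for illustration, not the exact fftools
definitions:

    #include <stdint.h>
    #include <libavutil/avutil.h>
    #include <libavutil/rational.h>

    typedef struct FrameData {
        /* items below are set only when the frame comes from a decoder */
        struct {
            uint64_t   frame_num;
            int64_t    pts;
            AVRational tb;
        } dec;

        int bits_per_raw_sample;
    } FrameData;

    static void frame_data_init(FrameData *fd)
    {
        /* invalid defaults, so a frame that never passed through a
         * decoder is recognizable */
        fd->dec.frame_num = UINT64_MAX;
        fd->dec.pts       = AV_NOPTS_VALUE;
        fd->dec.tb        = (AVRational){ 0, 0 };
    }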
This is only a preparatory step towards a fully threaded architecture
and does not yet make decoding truly parallel: the main thread
currently submits a packet and waits until it has been fully processed
by the decoding thread before moving on. Decoder behavior as observed
by the rest of the program should remain unchanged. That will change in
future commits, after encoders and filters are moved to threads and a
thread-aware scheduler is added.
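
A generic sketch of the submit-and-wait behaviour described above,
written in plain pthreads; it is not the actual ffmpeg code, the names
are invented, and initialization, shutdown and error handling are
omitted:

    #include <pthread.h>

    typedef struct DecTask {
        void           *pkt;   /* packet handed to the decoder thread */
        int             done;
        pthread_mutex_t lock;
        pthread_cond_t  cond;
    } DecTask;

    /* main thread: hand over one packet, then block until the decoder
     * thread reports that it has been fully processed */
    static void submit_and_wait(DecTask *t, void *pkt)
    {
        pthread_mutex_lock(&t->lock);
        t->pkt  = pkt;
        t->done = 0;
        pthread_cond_signal(&t->cond);
        while (!t->done)
            pthread_cond_wait(&t->cond, &t->lock);
        pthread_mutex_unlock(&t->lock);
    }

    /* decoder thread: wait for a packet, process it, signal completion */
    static void *decoder_thread(void *arg)
    {
        DecTask *t = arg;
        pthread_mutex_lock(&t->lock);
        for (;;) {
            while (!t->pkt)
                pthread_cond_wait(&t->cond, &t->lock);
            void *pkt = t->pkt;
            pthread_mutex_unlock(&t->lock);

            /* ... decode pkt and pass the resulting frames on ... */
            (void)pkt;

            pthread_mutex_lock(&t->lock);
            t->pkt  = NULL;
            t->done = 1;
            pthread_cond_signal(&t->cond);
        }
        return NULL;   /* not reached in this sketch */
    }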