Stop using InputStream.dts for generating missing timestamps for decoded
frames, because it contains pre-decoding timestamps, and there may be an
arbitrary amount of delay between input packets and output frames (e.g.
depending on the thread count when frame threading is used). It is also
in AV_TIME_BASE (i.e. microseconds), which may introduce unnecessary
rounding issues.
The new code maintains a timebase that is the inverse of the LCM of all
the sample rates seen so far, and can thus accurately represent every audio
sample. This timebase is used to generate missing timestamps after
decoding.
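A rough sketch of how such a timebase can be maintained (the helper name
and the lack of overflow handling are illustrative only, not the actual
ffmpeg code):

    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    /* Widen the audio timebase so that it can represent samples at every
     * sample rate seen so far: the denominator becomes
     * LCM(old denominator, sample_rate). */
    static AVRational update_audio_tb(AVRational tb, int sample_rate)
    {
        if (tb.den == 0)  /* no stream seen yet */
            return (AVRational){ 1, sample_rate };

        int64_t gcd = av_gcd(tb.den, sample_rate);
        int64_t lcm = tb.den / gcd * (int64_t)sample_rate;

        return (AVRational){ 1, (int)lcm };  /* overflow check omitted */
    }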
This changes the results of the following FATE tests:
* pcm_dvd-16-5.1-96000
* lavf-smjpeg
* adpcm-ima-smjpeg
In all of these the timestamps now better correspond to actual frame
durations.
That field was added to store timestamp conversion state for audio
decoding code. Later it started being used by streamcopy as well, but
that use is wrong because, for a given input stream, both decoding and
an arbitrary number of streamcopies may be performed simultaneously.
They would then all overwrite the same state variable.
Store this state in MuxStream instead.
This is the last use of InputStream in of_streamcopy(), so the ist
parameter can now be removed.
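Schematically (field names here are made up, not the actual ones): with
the state on the input side, one decoder plus N streamcopies of the same
input all write the same field; keeping the streamcopy state in the
per-output MuxStream gives each streamcopy its own copy instead.

    #include <stdint.h>

    /* before: one field shared by every consumer of this input stream */
    typedef struct InputStream {
        int64_t ts_copy_state;   /* clobbered by each consumer in turn */
    } InputStream;

    /* after: one instance per streamcopy output */
    typedef struct MuxStream {
        int64_t ts_copy_state;   /* private to this output stream */
    } MuxStream;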
It stores codec parameters of the stream submitted to the muxer, which
may be different from the codec parameters in AVStream due to bitstream
filtering.
This avoids the confusing back-and-forth synchronisation between the
encoder, bitstream filters, and the muxer: information now flows in only
one direction. It also reduces the need for non-muxing code to access
AVStream.
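One way to picture the resulting data flow (the struct layout shown is
hypothetical, for illustration only): the muxer keeps its own copy of
the parameters describing what was actually submitted to it, after any
bitstream filtering, instead of that information being written back into
AVStream.

    #include <libavcodec/codec_par.h>
    #include <libavformat/avformat.h>

    typedef struct MuxStream {
        AVStream          *st;   /* to become private to the muxer */
        AVCodecParameters *par;  /* parameters of the packets actually
                                    submitted for muxing, possibly
                                    rewritten by bitstream filters;
                                    written once, then read only */
    } MuxStream;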
Reduces access to the deeply nested muxer field
OutputStream.st->codecpar->codec_type for this fundamental and immutable
stream property.
Besides making the code shorter, this will allow making the AVStream
(OutputStream.st) private to the muxer in the future.
Set InputStream.decoding_needed/discard/etc. only from
ist_{filter,output}_add() functions. Reduces the knowledge of
InputStream internals in muxing/filtering code.
When no timestamps are available from the container, the video decoding
code will currently use fake dts values - generated in
process_input_packet() based on a combination of information from the
decoder and the parser (obtained via the demuxer) - to generate
timestamps during decoder flushing. This is fragile, hard to follow, and
unnecessarily convoluted, since more reliable information can be
obtained directly from post-decoding values.
The new code keeps track of the last decoded frame pts and estimates its
duration based on a number of heuristics. Timestamps generated when both
pts and pkt_dts are missing are then simply the pts+duration of the last frame.
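In rough pseudo-C, the fallback amounts to the following (names and the
duration handling are simplified, not the actual implementation):

    #include <libavutil/avutil.h>
    #include <libavutil/frame.h>

    /* Fill in a missing timestamp from the previously decoded frame. */
    static void fill_missing_pts(AVFrame *frame, int64_t *last_pts,
                                 int64_t *last_duration)
    {
        if (frame->pts == AV_NOPTS_VALUE) {
            if (frame->pkt_dts != AV_NOPTS_VALUE)
                frame->pts = frame->pkt_dts;
            else
                frame->pts = *last_pts + *last_duration;
        }

        /* remember this frame for the next one; the real duration
         * estimate comes from framerate/stream heuristics not shown here */
        *last_pts = frame->pts;
        if (frame->duration > 0)
            *last_duration = frame->duration;
    }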
The heuristics are somewhat complicated by the fact that lavf insists on
making up packet timestamps based on its highly incomplete information.
That should be removed in the future, allowing this code to be
simplified further.
The results of the following tests change:
* h264-3386 now requires -fps_mode passthrough to avoid dropping frames
at the end; this is a pathology of the interaction between the new and
old code and of the sample switching from field to frame coding in the
last packet; it will be fixed in following commits
* hevc-conformance-DELTAQP_A_BRCM_4 stops inventing an arbitrary
timestamp gap at the end
* hevc-small422chroma - the single frame output by this test now has a
timestamp of 0, rather than an arbitrary 7
Currently, the output streams to which an input stream is sent directly
(i.e. not through lavfi) are determined by iterating over ALL the output
streams and skipping the irrelevant ones. This is awkward and
inefficient.
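One possible shape for such a mapping (names are hypothetical): each
input stream records the output streams it feeds directly, so forwarding
a packet becomes a loop over just those streams.

    #include <libavcodec/packet.h>

    typedef struct OutputStream OutputStream;

    typedef struct InputStream {
        OutputStream **outputs;    /* fed directly, i.e. streamcopy,
                                      not through lavfi */
        int            nb_outputs;
    } InputStream;

    static void forward_packet(InputStream *ist, AVPacket *pkt,
                               void (*send)(OutputStream *, AVPacket *))
    {
        for (int i = 0; i < ist->nb_outputs; i++)
            send(ist->outputs[i], pkt);
    }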
This option adds a long string of numbers to the progress line, where
the i-th number contains the base-2 logarithm of the number of times a
frame with QP equal to i was seen by print_report().
There are multiple problems with this feature:
* despite this feature existing since 2005, a web search shows no indication
that it was ever useful for any meaningful purpose;
* the format of what is printed is entirely undocumented; one has to
find it out from the source code;
* QP values above 31 are silently ignored;
* it only works with one video stream;
* as it relies on global state, it is in conflict with ongoing
architectural changes.
It thus seems that the nontrivial cost of maintaining this option is not
worth its negligible (or possibly negative - since it pollutes the
already large option space) value.
Users who really need similar functionality can also implement it
themselves using -vstats.
Properly pass muxing return codes through the call stack instead.
Slightly changes behavior in case of errors:
* the output IO stream is closed even if writing the trailer returns an
error, which should be more correct
* all files get properly closed with -xerror, even if one of them fails
It is video encoding-only and does not need to be visible outside of
ffmpeg_enc.c.
Also, rename the variable to frames_prev_hist to be consistent with
the naming in do_video_out().
This is more correct, but was not possible before the recently-added
filtergraph parsing API.
Also, only pass hw devices to filters that are flagged as capable of
using them.
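A minimal sketch of that gating, assuming the capability flag is
libavfilter's AVFILTER_FLAG_HWDEVICE (the helper name is made up and
error handling is omitted):

    #include <libavfilter/avfilter.h>
    #include <libavutil/buffer.h>

    static void maybe_attach_hw_device(AVFilterContext *fctx,
                                       AVBufferRef *hw_device)
    {
        /* only filters declaring hw-device support get a reference */
        if (hw_device && (fctx->filter->flags & AVFILTER_FLAG_HWDEVICE))
            fctx->hw_device_ctx = av_buffer_ref(hw_device);
    }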
Tested-by: Niklas Haas