It stores codec parameters of the stream submitted to the muxer, which
may be different from the codec parameters in AVStream due to bitstream
filtering.
This avoids the confusing back-and-forth synchronisation between the
encoder, bitstream filters, and the muxer; information now flows in
only one direction. It also reduces the need for non-muxing code to
access AVStream.
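For illustration, a minimal sketch of the resulting one-way flow,
assuming a muxer-private MuxStream struct with an illustrative par
field (not necessarily the committed names):

    #include <libavcodec/avcodec.h>

    typedef struct MuxStream {
        AVStream          *st;
        // Parameters of the stream as submitted to the muxer; may differ
        // from st->codecpar after bitstream filtering.
        AVCodecParameters *par;
    } MuxStream;

    static int mux_stream_submit(MuxStream *ms, const AVCodecParameters *par_in)
    {
        ms->par = avcodec_parameters_alloc();
        if (!ms->par)
            return AVERROR(ENOMEM);
        // Copied once, in one direction; the muxer no longer reaches back
        // into AVStream for codec parameters.
        return avcodec_parameters_copy(ms->par, par_in);
    }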
Reduces access to the deeply nested muxer field
OutputStream.st->codecpar->codec_type for what is a fundamental and
immutable stream property.
Besides making the code shorter, this will allow making the AVStream
(OutputStream.st) private to the muxer in the future.
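A sketch of the access pattern this enables (struct heavily simplified):

    #include <libavformat/avformat.h>

    typedef struct OutputStream {
        enum AVMediaType type;  // set once at stream creation, immutable
        AVStream        *st;    // to become muxer-private eventually
    } OutputStream;

    static int ost_is_video(const OutputStream *ost)
    {
        // before: ost->st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO
        return ost->type == AVMEDIA_TYPE_VIDEO;
    }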
Set InputStream.decoding_needed/discard/etc. only from the
ist_{filter,output}_add() functions. This reduces the knowledge of
InputStream internals in the muxing/filtering code.
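A rough sketch of confining those writes, with an illustrative (not
exhaustive) field set:

    typedef struct InputStream {
        int discard;          // drop packets of this stream
        int decoding_needed;  // decoded frames are required downstream
    } InputStream;

    // Only these two functions are allowed to flip the state.
    static void ist_output_add(InputStream *ist)
    {
        ist->discard = 0;               // stream is used, keep its packets
    }

    static void ist_filter_add(InputStream *ist)
    {
        ist->discard         = 0;
        ist->decoding_needed = 1;       // feeding lavfi requires decoding
    }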
Creating a new output stream of a given type is currently done by
calling new_<type>_stream(), which all start by calling
new_output_stream() to allocate the stream and do common init, followed
by type-specific init.
Reverse this structure: the caller now calls the common function
ost_add() with the type as a parameter, which then calls the
type-specific function internally. This will allow adding common code
that runs after type-specific code in future commits.
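Schematically, the inverted structure looks like this sketch
(signatures simplified; alloc_common() is a hypothetical stand-in for
the shared init):

    #include <libavutil/avutil.h>

    typedef struct OutputStream { enum AVMediaType type; } OutputStream;

    static void new_stream_video   (OutputStream *ost) { /* video-specific init */ }
    static void new_stream_audio   (OutputStream *ost) { /* audio-specific init */ }
    static void new_stream_subtitle(OutputStream *ost) { /* subtitle-specific init */ }

    static OutputStream *alloc_common(enum AVMediaType type); // hypothetical

    static OutputStream *ost_add(enum AVMediaType type)
    {
        OutputStream *ost = alloc_common(type);   // common init runs first
        if (!ost)
            return NULL;

        switch (type) {
        case AVMEDIA_TYPE_VIDEO:    new_stream_video(ost);    break;
        case AVMEDIA_TYPE_AUDIO:    new_stream_audio(ost);    break;
        case AVMEDIA_TYPE_SUBTITLE: new_stream_subtitle(ost); break;
        default:                                              break;
        }

        // common code that must run *after* type-specific init can go here
        return ost;
    }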
Currently, output streams fed directly by an input stream (i.e. not
through lavfi) are found by iterating over ALL the output streams and
skipping the irrelevant ones. This is awkward and inefficient.
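One plausible shape for the fix, sketched purely for illustration
(field and helper names are hypothetical), is for each InputStream to
hold direct references to the outputs it feeds:

    #include <libavcodec/packet.h>

    typedef struct OutputStream OutputStream;

    typedef struct InputStream {
        OutputStream **outputs;    // streamcopy outputs fed directly
        int            nb_outputs;
    } InputStream;

    static void do_streamcopy(OutputStream *ost, const AVPacket *pkt); // hypothetical

    static void ist_send_packet(InputStream *ist, const AVPacket *pkt)
    {
        // only the relevant outputs are visited, no global scan
        for (int i = 0; i < ist->nb_outputs; i++)
            do_streamcopy(ist->outputs[i], pkt);
    }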
The code currently uses lavfi for this, which creates a sort of
configuration dependency loop: the encoder should ideally be
initialized with information from the first audio frame, but to get
this frame one must first open the encoder to know the frame size.
This necessitates an awkward workaround, which causes audio handling
to differ from video.
With this change, audio encoder initialization is congruent with video.
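Sketched under the assumption of lavfi's buffersink API (error
handling trimmed), the resulting order is: take parameters from the
first filtered frame, open the encoder, then tell the sink to emit
frames of the encoder's size:

    #include <libavcodec/avcodec.h>
    #include <libavfilter/buffersink.h>

    static int init_audio_encoder(AVCodecContext *enc, AVFilterContext *sink,
                                  const AVFrame *frame)
    {
        int ret;

        // parameters come from the first frame, as with video
        enc->sample_rate = frame->sample_rate;
        enc->sample_fmt  = (enum AVSampleFormat)frame->format;
        ret = av_channel_layout_copy(&enc->ch_layout, &frame->ch_layout);
        if (ret < 0)
            return ret;

        ret = avcodec_open2(enc, NULL, NULL);
        if (ret < 0)
            return ret;

        // frame_size is known only now; no pre-opening workaround needed
        if (enc->frame_size > 0 &&
            !(enc->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE))
            av_buffersink_set_frame_size(sink, enc->frame_size);

        return 0;
    }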
Analogous to -enc_stats*, but happens right before muxing. Useful
because bitstream filters and the sync queue can modify packets after
encoding and before muxing. Also has access to the muxing timebase.
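A hedged sketch of where such a hook sits (the output format and
function name are illustrative):

    #include <inttypes.h>
    #include <stdio.h>
    #include <libavformat/avformat.h>

    static int write_packet(AVFormatContext *fc, FILE *stats, AVPacket *pkt)
    {
        // The packet is final here: bitstream filters and the sync queue
        // have run, and timestamps are already in the muxing timebase.
        if (stats)
            fprintf(stats, "stream %d: pts=%"PRId64" dts=%"PRId64" size=%d\n",
                    pkt->stream_index, pkt->pts, pkt->dts, pkt->size);

        return av_interleaved_write_frame(fc, pkt);
    }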
Splits the currently handled subtitle at random access point
packets of a configurable output stream which the heartbeat follows.
Currently, only subtitle streams that are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way, a subtitle that is known to be shown at a given time can be
split and passed to the muxer before its full duration is known. This
is also a drawback, as it essentially outputs multiple subtitles from
a single input subtitle that continues over multiple random access
points. Thus this feature should not be utilized in cases where
subtitle output latency does not matter.
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
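Purely as illustration of the splitting described above (all struct,
field, and helper names here are hypothetical), the trigger amounts to:

    #include <stdint.h>
    #include <libavcodec/packet.h>

    typedef struct SubtitleState {
        struct { int64_t start_pts, end_pts; } cur;
        int pending;   // a decoded subtitle is buffered, duration unknown
    } SubtitleState;

    static void emit_subtitle(SubtitleState *sub); // hypothetical: sends to muxer

    static void heartbeat_on_packet(SubtitleState *sub, const AVPacket *pkt)
    {
        if (!(pkt->flags & AV_PKT_FLAG_KEY) || !sub->pending)
            return;

        sub->cur.end_pts   = pkt->pts;  // close the part shown so far...
        emit_subtitle(sub);             // ...and mux it immediately
        sub->cur.start_pts = pkt->pts;  // continuation of the same subtitle
    }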
Current code may, depending on the muxer, decide to use VSYNC_VFR tagged
with the specified framerate, without actually performing framerate
conversion. This is clearly wrong and against the documentation, which
states unambiguously that -r should produce CFR output for video
encoding.
FATE test changes:
* nuv-rtjpeg: replace -r with '-enc_time_base -1', which keeps the
original timebase. Output frames are now produced with proper
durations.
* filter-mpdecimate: just drop the -r option; it is unnecessary
* filter-fps-r: remove, this test makes no sense and actually
produces broken VFR output (with incorrect frame durations).
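For illustration, the corrected decision described above reduces to
something like this sketch (enum and parameter names simplified
relative to the real vsync code):

    enum VideoSyncMethod { VSYNC_CFR, VSYNC_VFR, VSYNC_PASSTHROUGH };

    static enum VideoSyncMethod pick_vsync(int user_forced_rate,
                                           int muxer_prefers_vfr)
    {
        // -r is documented to produce CFR output when encoding video,
        // so a user-specified framerate overrides the muxer's preference.
        if (user_forced_rate)
            return VSYNC_CFR;
        return muxer_prefers_vfr ? VSYNC_VFR : VSYNC_CFR;
    }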
There are 8 of them and they are typically used together. This allows
passing just this struct to forced_kf_apply(), which makes it clear
that the rest of the OutputStream is not accessed there.
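A sketch of the grouping; the field set shown is representative rather
than exact:

    #include <stdint.h>
    #include <libavutil/eval.h>
    #include <libavutil/frame.h>

    typedef struct KeyframeForceCtx {
        int      type;                  // how keyframes are forced
        int64_t  ref_pts;
        int64_t *pts;                   // explicit forced-keyframe times
        int      nb_pts, index;
        AVExpr  *pexpr;                 // parsed force_key_frames expression
        double   expr_const_values[4];
    } KeyframeForceCtx;

    // forced_kf_apply() receives only this struct, so it visibly cannot
    // touch any other OutputStream state.
    void forced_kf_apply(KeyframeForceCtx *kf, AVFrame *frame);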
Do it in set_dispositions() rather than during stream creation.
Since at this point all other stream information is known, this allows
setting disposition based on metadata, which implements #10015. This
also avoids an extra allocated string in OutputStream that was unused
after of_open().
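A sketch of the resulting placement (struct layout and the helper are
hypothetical):

    typedef struct OutputStream {
        const char *user_disposition;   // value of -disposition, if given
    } OutputStream;

    typedef struct OutputFile {
        OutputStream **streams;
        int            nb_streams;
    } OutputFile;

    static void apply_disposition_string(OutputStream *ost); // hypothetical

    // Runs near the end of of_open(), once all stream information,
    // including metadata, is known.
    static void set_dispositions(OutputFile *of)
    {
        for (int i = 0; i < of->nb_streams; i++) {
            OutputStream *ost = of->streams[i];
            if (ost->user_disposition)
                apply_disposition_string(ost);
            // metadata-based defaults can also be decided here
        }
    }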
Replace it with an array of streams in each InputFile. This is a more
accurate reflection of the actual relationship between InputStream and
InputFile.
Analogous to what was previously done to output streams in
7ef7a22251.
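Schematically (simplified):

    typedef struct InputStream InputStream;

    typedef struct InputFile {
        InputStream **streams;   // owned by the file, not a global array
        int           nb_streams;
    } InputFile;

    static InputStream *file_stream(InputFile *f, int idx)
    {
        // before: a global array indexed via the file's stream offset
        return f->streams[idx];
    }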
Specifically, the of_add_attachments() call (which can add attachment
streams to the output) and the check whether the output file contains
any streams. They both logically belong in create_streams().
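Sketched with stub types (names beyond those mentioned above are
illustrative):

    typedef struct Muxer { int nb_streams; } Muxer;

    static void map_streams(Muxer *mux)        { /* explicit + automatic maps */ }
    static void of_add_attachments(Muxer *mux) { /* may add attachment streams */ }

    static int create_streams(Muxer *mux)
    {
        map_streams(mux);
        of_add_attachments(mux);  // moved here from the caller

        // meaningful only after attachments had a chance to add streams
        if (mux->nb_streams == 0)
            return -1;

        return 0;
    }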
The current code will override the *_disable fields (set by -vn/-an
options) when creating output streams for unlabeled complex filtergraph
outputs, in order to disable automatic mapping for the corresponding
media type.
However, this will apply not only to automatic mappings, but to manual
ones as well, which should not happen. Avoid this by adding local
variables that are used only for automatic mappings.
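A sketch of the fix (variable and parameter names are illustrative):

    typedef struct OptionsContext {
        int video_disable;   // set by -vn
        int audio_disable;   // set by -an
    } OptionsContext;

    static void map_auto_streams(const OptionsContext *o,
                                 int fg_has_video_out, int fg_has_audio_out)
    {
        // locals, so o->*_disable keeps reflecting only the user's options
        // and manual mappings remain unaffected
        int auto_disable_v = o->video_disable;
        int auto_disable_a = o->audio_disable;

        // unlabeled complex filtergraph outputs suppress automatic mapping
        // of their media type without touching the option fields
        if (fg_has_video_out)
            auto_disable_v = 1;
        if (fg_has_audio_out)
            auto_disable_a = 1;

        if (!auto_disable_v)
            { /* perform automatic video mapping */ }
        if (!auto_disable_a)
            { /* perform automatic audio mapping */ }
    }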