This is more correct, but was not possible before the recently-added
filtergraph parsing API.
Also, only pass hw devices to filters that are flagged as capable of
using them.
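As an illustration of the flag check (AVFILTER_FLAG_HWDEVICE is the real libavfilter flag; the helper below is a simplified sketch, not the actual ffmpeg code):

    #include <libavfilter/avfilter.h>
    #include <libavutil/buffer.h>

    /* After the filtergraph has been parsed, hand the hw device only
     * to filters that declare they can use one. Error handling
     * omitted for brevity. */
    static void pass_hw_device(AVFilterGraph *graph, AVBufferRef *dev)
    {
        for (unsigned i = 0; i < graph->nb_filters; i++) {
            AVFilterContext *fc = graph->filters[i];

            if (!(fc->filter->flags & AVFILTER_FLAG_HWDEVICE))
                continue;

            fc->hw_device_ctx = av_buffer_ref(dev);
        }
    }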
Tested-by: Niklas Haas
Analogous to -enc_stats*, but happens right before muxing. Useful
because bitstream filters and the sync queue can modify packets after
encoding and before muxing. Also has access to the muxing timebase.
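For illustration only (the writer below is hypothetical, not the actual implementation): stats emitted at this point can report the packet as it will really be written, in the stream's muxing timebase.

    #include <inttypes.h>
    #include <stdio.h>
    #include <libavcodec/packet.h>
    #include <libavutil/rational.h>

    /* Hypothetical sketch: log a packet right before it is muxed,
     * i.e. after bitstream filters and the sync queue have run.
     * 'tb' is the muxing (stream) timebase. */
    static void write_mux_stats(FILE *io, int stream_index,
                                const AVPacket *pkt, AVRational tb)
    {
        fprintf(io, "stream=%d pts=%"PRId64" dts=%"PRId64
                " duration=%"PRId64" size=%d tb=%d/%d\n",
                stream_index, pkt->pts, pkt->dts, pkt->duration,
                pkt->size, tb.num, tb.den);
    }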
Splits the currently handled subtitle at the random access point
packets of a configurable output stream (the "heartbeat" stream).
Currently only subtitle streams which are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way the subtitle - which is known to be shown at this time -
can be split and passed to the muxer before its full duration is
known. This is also a drawback, as it essentially outputs multiple
subtitles from a single input subtitle that continues over multiple
random access points. Thus, this feature should not be utilized in
cases where subtitle output latency does not matter.
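Conceptually (all names below are hypothetical; this is a sketch of the idea, not the actual code): when a random access point arrives on the heartbeat stream while a subtitle is on screen, the subtitle is closed off at that point, sent to the muxer, and continued as a new subtitle.

    #include <stdint.h>

    /* Hypothetical state for the currently shown subtitle. */
    typedef struct SubState {
        int     active;    /* a subtitle is currently being shown */
        int64_t start_pts; /* when the current subtitle started    */
    } SubState;

    /* Hypothetical: send a subtitle with the given start/duration
     * to the muxer. */
    static void emit_subtitle(SubState *s, int64_t pts, int64_t duration);

    static void on_heartbeat(SubState *s, int64_t rap_pts)
    {
        if (!s->active || rap_pts <= s->start_pts)
            return;

        /* Output the part shown so far, even though the full
         * duration is not yet known... */
        emit_subtitle(s, s->start_pts, rap_pts - s->start_pts);

        /* ...and continue it as a new subtitle from the RAP on. */
        s->start_pts = rap_pts;
    }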
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Rather than the encoder timebase. Since the times are parsed as
microseconds, this will not reduce precision, except possibly when
chapter times are used and the chapter timebase happens to be better
aligned with the encoder timebase, which is unlikely.
This will allow parsing the keyframe times earlier (before encoder
timebase is known) in future commits.
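A sketch of the two-step approach using the real av_parse_time() and av_rescale_q() APIs (error handling omitted):

    #include <stdint.h>
    #include <libavutil/avutil.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/parseutils.h>

    /* Step 1: parse the user-supplied time into microseconds
     * (AV_TIME_BASE_Q); the encoder timebase is not needed yet. */
    static int parse_kf_time(const char *str, int64_t *us)
    {
        return av_parse_time(us, str, 1 /* parse as duration */);
    }

    /* Step 2: once the encoder timebase is known, rescale. As the
     * input granularity is microseconds, no precision is lost. */
    static int64_t kf_time_to_enc_tb(int64_t us, AVRational enc_tb)
    {
        return av_rescale_q(us, AV_TIME_BASE_Q, enc_tb);
    }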
There are 8 of them and they are typically used together. This allows
passing just this struct to forced_kf_apply(), which makes it clear
that the rest of the OutputStream is not accessed there.
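Schematically (field names and the function signature below are illustrative, not necessarily the real ones):

    #include <stdint.h>
    #include <libavutil/eval.h>
    #include <libavutil/rational.h>

    /* Sketch: the forced-keyframe state grouped into one struct. */
    typedef struct KeyframeForceCtx {
        int      type;             /* how keyframes are forced        */
        int64_t  ref_pts;          /* reference timestamp             */
        int64_t *ts;               /* parsed forced-keyframe times    */
        int      nb_ts;            /* number of entries in ts         */
        int      index;            /* next entry in ts to apply       */
        AVExpr  *pexpr;            /* compiled -force_key_frames expr */
        double  *expr_const_values;
        int      dropped_keyframe;
    } KeyframeForceCtx;

    /* The function now takes only this struct, so it clearly cannot
     * touch the rest of the OutputStream. */
    int forced_kf_apply(KeyframeForceCtx *kf, AVRational tb, int64_t pts);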
Do it in set_dispositions() rather than during stream creation.
Since at this point all other stream information is known, this allows
setting disposition based on metadata, which implements #10015. This
also avoids an extra allocated string in OutputStream that was unused
after of_open().
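A sketch of the new ordering (the helpers below are hypothetical):

    /* Hypothetical sketch: apply user disposition overrides late, in
     * set_dispositions(), when all streams exist and their metadata
     * is final. */
    typedef struct OutputStream OutputStream;

    /* Returns the -disposition value given for this stream, or NULL. */
    const char *disposition_override(const OutputStream *ost);

    /* Parses e.g. "default+forced" and applies it; may now consult
     * the stream's complete metadata. */
    int apply_disposition(OutputStream *ost, const char *str);

    static int set_dispositions_sketch(OutputStream **streams, int nb)
    {
        for (int i = 0; i < nb; i++) {
            const char *str = disposition_override(streams[i]);
            if (str) {
                int ret = apply_disposition(streams[i], str);
                if (ret < 0)
                    return ret;
            }
        }
        return 0;
    }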
Replace it with an array of streams in each InputFile. This is a more
accurate reflection of the actual relationship between InputStream and
InputFile.
Analogous to what was previously done to output streams in
7ef7a22251.
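In outline (a sketch; surrounding fields elided):

    /* Each input file now owns its streams, instead of all streams
     * living in one global input_streams array. */
    typedef struct InputStream InputStream;

    typedef struct InputFile {
        /* ... */
        InputStream **streams;
        int           nb_streams;
    } InputFile;

    /* Iteration becomes per-file rather than via global indices: */
    static void for_each_stream(InputFile *f, void (*fn)(InputStream *))
    {
        for (int i = 0; i < f->nb_streams; i++)
            fn(f->streams[i]);
    }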
The current adjustment of input start times just adjusts the tsoffset,
and it does so by resetting the tsoffset to nullify the new start time.
This breaks -copyts, ignores input_ts_offset, and breaks -isync as
well as wrap correction.
Fixed by taking these parameters into account and by correcting start
times just before sync offsets are applied.
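A sketch of the corrected computation (names illustrative; this shows the idea, not the actual code):

    #include <stdint.h>

    /* Adjust the offset by the delta to the requested start time
     * instead of resetting it, so whatever is already folded into
     * cur_ts_offset (e.g. input_ts_offset, wrap correction) is kept. */
    static int64_t adjusted_ts_offset(int64_t cur_ts_offset,
                                      int64_t old_start, int64_t new_start,
                                      int copy_ts)
    {
        /* With -copyts, timestamps must pass through unchanged. */
        if (copy_ts)
            return cur_ts_offset;

        return cur_ts_offset - (new_start - old_start);
    }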
This is similar to what was done before for output files and will allow
introducing demuxer-private state in future commits.
Unlike for muxing, the code is moved to the existing ffmpeg_demux.c
rather than to a new file. The reason is just file size - the demuxing
code is much smaller than the muxing code.
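The arrangement, in outline (a sketch; fields elided):

    #include <libavformat/avformat.h>

    /* Public part, visible to the rest of ffmpeg: */
    typedef struct InputFile {
        AVFormatContext *ctx;
        /* ... */
    } InputFile;

    /* Private part, visible only inside ffmpeg_demux.c: */
    typedef struct Demuxer {
        InputFile f; /* must be the first member */
        /* ...future demuxer-private state goes here... */
    } Demuxer;

    static Demuxer *demuxer_from_ifile(InputFile *f)
    {
        return (Demuxer *)f;
    }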
Now that we have proper options for defining display matrix
overrides, this should no longer be required.
fftools does not have its own versioning, so for now the define is
just set to 1 and disables the functionality if set to zero.
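The pattern looks like this (the define name is illustrative, following the commit's description):

    /* In a header: a plain on/off gate, since fftools has no version
     * number of its own to tie the deprecation to. */
    #define FFMPEG_ROTATION_METADATA 1

    /* At each use site: */
    #if FFMPEG_ROTATION_METADATA
        /* ...legacy rotation-metadata handling... */
    #endif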
This enables overriding the rotation as well as horizontal/vertical
flip state of a specific video stream on the input side.
Additionally, switch the single test that was using the rotation
metadata to instead override the input display rotation, thus leading
to the same result.
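The underlying libavutil display-matrix API (av_display_rotation_set() and av_display_matrix_flip(), both from libavutil/display.h, are real) makes such an override simple to express; the helper below is a sketch:

    #include <stdint.h>
    #include <libavutil/display.h>

    /* Build a 3x3 display matrix for the requested rotation
     * (counterclockwise degrees) and flip state, e.g. for use as
     * AV_PKT_DATA_DISPLAYMATRIX side data. */
    static void build_display_matrix(int32_t m[9], double rotation,
                                     int hflip, int vflip)
    {
        av_display_rotation_set(m, rotation);
        if (hflip || vflip)
            av_display_matrix_flip(m, hflip, vflip);
    }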
Replace it with an array of streams in each OutputFile. This is a more
accurate reflection of the actual relationship between OutputStream and
OutputFile. This is easier to handle and will allow further
simplifications in future commits.
This is now possible since the code allocating OutputFile can see
sizeof(Muxer). Avoids the overhead and extra complexity of allocating
two objects instead of one.
Similar to what is done e.g. for AVStream/FFStream in lavf.
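In outline, the pattern (fields illustrative) is the same as AVStream/FFStream:

    #include <libavutil/mem.h>

    /* Public part, visible to the rest of ffmpeg: */
    typedef struct OutputFile {
        int index;
        /* ... */
    } OutputFile;

    /* Private part, visible only to the muxing code: */
    typedef struct Muxer {
        OutputFile of; /* must be the first member */
        /* ...muxer-private state... */
    } Muxer;

    /* One allocation now covers both objects: */
    static OutputFile *of_alloc(void)
    {
        Muxer *mux = av_mallocz(sizeof(*mux));
        return mux ? &mux->of : NULL;
    }

    static Muxer *mux_from_of(OutputFile *of)
    {
        return (Muxer *)of;
    }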