Their usefulness is questionable: very few decoders set them, and their type
should have been int64_t. A replacement field can be added later if a valid use
case is found.
Signed-off-by: Marton Balint <cus@passwd.hu>
Frame counters can overflow relatively easily (INT_MAX frames amount to
only slightly more than one year of 60 fps content), so make sure we use
64-bit values for them.
Also deprecate the old 32-bit frame_number field.
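A minimal sketch of reading the new counter follows; it assumes the
64-bit replacement field is AVCodecContext.frame_num (that name is an
assumption, not part of this message):

    #include <inttypes.h>
    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    static void log_decode_progress(const AVCodecContext *dec)
    {
        /* frame_num (int64_t) does not wrap after INT_MAX frames, which
         * at 60 fps is reached after roughly 414 days of content */
        printf("decoded %" PRId64 " frames\n", dec->frame_num);
    }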
Signed-off-by: Marton Balint <cus@passwd.hu>
Many filters accept user-provided data that is cumbersome to provide as
text strings - e.g. binary files or very long text. For that reason such
filters typically provide an option whose value is the path from which
the filter loads the actual data.
However, filters doing their own IO internally is a layering violation
that the callers may not expect, and is thus best avoided. With the
recently introduced graph segment parsing API, loading option values
from files can now be handled by the caller.
This commit makes use of the new API in ffmpeg CLI. Any option name in
the filtergraph syntax can now be prefixed with a slash '/'. This will
cause ffmpeg to interpret the value as the path to load the actual value
from.
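For illustration, a hypothetical invocation (the filter, option and
file names are placeholders only) could look like:

    ffmpeg -i in.mkv -vf drawtext=/text=hdr.txt out.mkv

Here ffmpeg reads the contents of hdr.txt and uses it as the value of
the drawtext 'text' option.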
Callers currently have two ways of adding filters to a graph - they can
either
- create, initialize, and link them manually
- use one of the avfilter_graph_parse*() functions, which take a
(typically end-user-written) string, split it into individual filter
definitions+options, then create filters, apply options, initialize
filters, and finally link them - all based on information from this
string.
A major problem with the second approach is that it performs many
actions as a single atomic unit, leaving the caller no space to
intervene in between. Such intervention would be useful e.g. to
- modify filter options;
- supply hardware device contexts;
both of which typically must be done before the filter is initialized.
Callers who need such intervention are then forced to invent their own
filtergraph parsing, which is clearly suboptimal.
This commit aims to address this problem by adding a new modular
filtergraph parsing API. It adds a new avfilter_graph_segment_parse()
function to parse a string filtergraph description into an intermediate
tree-like representation (AVFilterGraphSegment and its children).
This intermediate form may then be applied step by step using further
new avfilter_graph_segment*() functions, with user intervention possible
between each step.
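A minimal sketch of the intended step-by-step usage follows. It assumes
the names introduced by this API (avfilter_graph_segment_*(),
AVFilterGraphSegment and friends); error handling and the final
avfilter_graph_config() call are omitted for brevity:

    #include <libavfilter/avfilter.h>
    #include <libavutil/buffer.h>

    static void build_graph(AVFilterGraph *graph, const char *desc,
                            AVBufferRef *hw_device,
                            AVFilterInOut **inputs, AVFilterInOut **outputs)
    {
        AVFilterGraphSegment *seg = NULL;

        avfilter_graph_segment_parse(graph, desc, 0, &seg);
        avfilter_graph_segment_create_filters(seg, 0);

        /* the caller can intervene here, e.g. to supply hardware device
         * contexts or tweak options before the filters are initialized */
        for (size_t i = 0; i < seg->nb_chains; i++)
            for (size_t j = 0; j < seg->chains[i]->nb_filters; j++)
                seg->chains[i]->filters[j]->filter->hw_device_ctx =
                    av_buffer_ref(hw_device);

        avfilter_graph_segment_apply_opts(seg, 0);
        avfilter_graph_segment_init(seg, 0);
        avfilter_graph_segment_link(seg, 0, inputs, outputs);

        avfilter_graph_segment_free(&seg);
    }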
This script generates the current general assembly voters according to
the criterion of '20 commits in the last 36 months'.
Signed-off-by: J. Dekker <jdek@itanimul.li>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
libavutil/color_utils contains some avpriv_ symbols that map
enum AVColorTransferCharacteristic values to gamma-curve approximations and
to the actual transfer functions to invert them (i.e. -> linear).
There are two issues with this:
(1) avpriv is evil and should be avoided whenever possible
(2) libavutil/csp.h exposes a public API for handling color that
already handles primaries and matrices
I don't see any reason this API has to be private, so this commit takes
the functionality from avutil/color_utils and merges it into avutil/csp
with an exposed av_ API rather than the previous avpriv_ API.
Every reference to the previous API has been updated to point to the
new one. color_utils.h has been deleted as well. This should not break
any applications as it only contained avpriv_ symbols in the first
place, so nothing in that header could be referenced by other
applications.
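A minimal sketch of using the resulting public API; the exact names
(av_csp_trc_function, av_csp_trc_func_from_id(),
av_csp_approximate_trc_gamma()) are assumed from libavutil/csp.h:

    #include <stdio.h>
    #include <libavutil/csp.h>

    static void show_trc(enum AVColorTransferCharacteristic trc)
    {
        av_csp_trc_function fn    = av_csp_trc_func_from_id(trc);
        double              gamma = av_csp_approximate_trc_gamma(trc);

        if (fn)
            /* evaluate the transfer function for this characteristic */
            printf("f(0.5) = %f\n", fn(0.5));
        if (gamma > 0.0)
            printf("approximate gamma: %f\n", gamma);
    }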
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Analogous to -enc_stats*, but happens right before muxing. Useful
because bitstream filters and the sync queue can modify packets after
encoding and before muxing. Also has access to the muxing timebase.
The current HLS implementation simply skips a failed segment to catch
up with the stream, but this is not optimal for some use cases such as
livestream recording.
Add an option to retry a failed segment to ensure the output file is
a complete stream.
Signed-off-by: gnattu <gnattuoc@me.com>
Reviewed-by: Steven Liu <liuqi05@kuaishou.com>
Splits the currently handled subtitle at random access point
packets that can be configured to follow a specific output stream.
Currently only subtitle streams which are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way a subtitle which is known to be shown at this time can be
split and passed to the muxer before its full duration is known. This
is also a drawback, as it essentially outputs multiple subtitles from
a single input subtitle that continues over multiple random access
points. Thus this feature should not be utilized in cases where
subtitle output latency does not matter.
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This is intended to be a more convenient replacement for
reordered_opaque.
Add support for it in the two encoders that offer
AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE: libx264 and libx265. Other
encoders will be supported in future commits.
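A minimal sketch of the intended use; it assumes the new mechanism is
the AV_CODEC_FLAG_COPY_OPAQUE codec flag (set on the encoder context
before avcodec_open2()) together with the AVFrame.opaque and
AVPacket.opaque fields, and omits error handling:

    #include <libavcodec/avcodec.h>

    static void encode_one(AVCodecContext *enc, AVFrame *frame,
                           AVPacket *pkt, void *user_tag)
    {
        /* enc->flags must already contain AV_CODEC_FLAG_COPY_OPAQUE */
        frame->opaque = user_tag;
        avcodec_send_frame(enc, frame);

        while (avcodec_receive_packet(enc, pkt) >= 0) {
            /* pkt->opaque carries the tag of the frame this packet was
             * encoded from, even when the encoder reorders its output */
            void *tag = pkt->opaque;
            (void)tag;
            av_packet_unref(pkt);
        }
    }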