Add a mechanism to AVClass to allow objects to signal their state to
generic code. When an object flags itself with the 'initialized' state,
print an error (and fail, after the next major bump) if the caller
attempts to set non-runtime options.
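A caller-side sketch of the intended behaviour, assuming a codec context flags itself as initialized inside avcodec_open2() and that "b" (bitrate) is not marked as a runtime option (both assumptions, not taken from this message):

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* Hedged illustration: once the object has signalled the 'initialized'
     * state, the generic AVOption code reports an error when a non-runtime
     * option is set (and will fail outright after the next major bump). */
    static int set_bitrate_after_open(AVCodecContext *ctx)
    {
        int ret = avcodec_open2(ctx, NULL, NULL); /* object now 'initialized' */
        if (ret < 0)
            return ret;

        /* "b" is assumed here to lack the runtime flag, so this call is
         * expected to log an error. */
        return av_opt_set(ctx, "b", "500k", 0);
    }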
Will be used to export certain information present in HEIF samples, like
rotation metadata, ICC profiles, and potentially others.
Signed-off-by: James Almer <jamrial@gmail.com>
"int-list" options are a hack that provides rudimentary support for
array-type options by treating them as byte arrays (i.e.
AV_OPT_TYPE_BINARY). Since we now have proper array-type options, they
should replace "int-list" everywhere (which happens to be just
buffersink).
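For illustration, a sketch of the buffersink case: the int-list macro is the existing API, while the replacement option name ("pixel_formats") and the '|' separator are assumptions here.

    #include <libavfilter/avfilter.h>
    #include <libavutil/opt.h>
    #include <libavutil/pixfmt.h>

    /* Old style: an "int-list", i.e. an AV_OPT_TYPE_BINARY blob terminated
     * by AV_PIX_FMT_NONE. */
    static int set_pix_fmts_old(AVFilterContext *buffersink_ctx)
    {
        static const enum AVPixelFormat pix_fmts[] = {
            AV_PIX_FMT_YUV420P, AV_PIX_FMT_NV12, AV_PIX_FMT_NONE
        };
        return av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts,
                                   AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
    }

    /* New style (sketch): a proper array-type option set from a string;
     * option name and separator are assumed, not taken from the source. */
    static int set_pix_fmts_new(AVFilterContext *buffersink_ctx)
    {
        return av_opt_set(buffersink_ctx, "pixel_formats", "yuv420p|nv12",
                          AV_OPT_SEARCH_CHILDREN);
    }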
This replaces the myriad of existing lists in AVCodec by a unified API
call, allowing us to (ultimately) trim down the sizeof(AVCodec) quite
substantially, while also making this more trivially extensible.
In addition to the already covered lists, add two new entries for color
space and color range, mirroring the newly added negotiable fields in
libavfilter.
Once the deprecation period passes for the existing public fields, the
rough plan is to move the commonly used fields (such as
pix_fmt/sample_fmt) into FFCodec, possibly as a union of audio and video
configuration types, and then implement the rarely used fields with
custom callbacks.
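A sketch of querying one of these lists through the unified call; the function and enum names follow this entry's description of the new API, but should be checked against avcodec.h rather than taken as definitive.

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/pixdesc.h>

    /* List the pixel formats an encoder reports via the unified query
     * instead of reading AVCodec.pix_fmts directly. */
    static int print_supported_pix_fmts(const AVCodec *codec)
    {
        const enum AVPixelFormat *fmts = NULL;
        int nb_fmts = 0, ret;

        ret = avcodec_get_supported_config(NULL, codec,
                                           AV_CODEC_CONFIG_PIX_FORMAT, 0,
                                           (const void **)&fmts, &nb_fmts);
        if (ret < 0)
            return ret;

        if (!fmts) { /* no restriction reported */
            printf("any\n");
            return 0;
        }

        for (int i = 0; i < nb_fmts; i++)
            printf("%s\n", av_get_pix_fmt_name(fmts[i]));
        return 0;
    }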
Previously one could only replace the entire array with a new one
deserialized from a string. The new API allows inserting, replacing, and
removing arbitrary element ranges.
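A minimal sketch of element-wise manipulation, assuming av_opt_set_array() as the entry point; "my_ints" is a hypothetical array-type option, and the exact insert/replace/remove semantics (search flags, NULL value pointer) should be checked against the opt.h documentation.

    #include <libavutil/opt.h>

    static int update_array_option(void *obj)
    {
        static const int vals[] = { 42, 43 };

        /* Write two int elements starting at element index 1 of the
         * hypothetical "my_ints" array option; whether this inserts or
         * replaces depends on the search flags (assumption). */
        return av_opt_set_array(obj, "my_ints", 0, 1, 2,
                                AV_OPT_TYPE_INT, vals);
    }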
The B extension was finally ratified in May 2024, encompassing:
- Zba (addresses),
- Zbb (basics) and
- Zbs (single bits).
It does not include Zbc (base-2 polynomials).
And av_stream_get_codec_timebase().
They were both added for ffmpeg CLI, which no longer calls either of
them. Furthermore the notion of "internal stream timing info" that needs
to be transferred with a special magic API function is fundamentally
flawed and should be removed.
This lets us detect when a container has flagged a stream as multilayer.
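A one-line check, assuming the flag is exposed as a stream disposition named AV_DISPOSITION_MULTILAYER:

    #include <libavformat/avformat.h>

    /* Returns non-zero if the container flagged this stream as multilayer
     * (flag name assumed). */
    static int stream_is_multilayer(const AVStream *st)
    {
        return !!(st->disposition & AV_DISPOSITION_MULTILAYER);
    }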
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
av_mastering_display_metadata_alloc() is not useful in scenarios where you need to
know the runtime size of AVMasteringDisplayMetadata.
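A sketch of the use case, assuming the new allocator is av_mastering_display_metadata_alloc_size() and returns the struct along with its runtime size, e.g. for wrapping it in an AVBufferRef:

    #include <stdint.h>
    #include <libavutil/buffer.h>
    #include <libavutil/error.h>
    #include <libavutil/mastering_display_metadata.h>
    #include <libavutil/mem.h>

    static int alloc_mdm_buffer(AVBufferRef **out)
    {
        size_t size;
        AVMasteringDisplayMetadata *mdm =
            av_mastering_display_metadata_alloc_size(&size);
        if (!mdm)
            return AVERROR(ENOMEM);

        /* Knowing the runtime size lets the struct be wrapped for use as
         * side data; av_buffer_create() with a NULL free callback falls
         * back to av_free() on the data. */
        *out = av_buffer_create((uint8_t *)mdm, size, NULL, NULL, 0);
        if (!*out) {
            av_free(mdm);
            return AVERROR(ENOMEM);
        }
        return 0;
    }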
Signed-off-by: James Almer <jamrial@gmail.com>
* SMPTE ST 2128 IPT-C2 defines the coefficients utilized in DoVi
Profile 5. Profile 5 can thus now be represented in VUI as
{AVCOL_RANGE_JPEG, AVCOL_PRI_BT2020, AVCOL_TRC_SMPTE2084,
AVCOL_SPC_IPT_C2, AVCHROMA_LOC_LEFT} (although other chroma
sample locations are allowed). AVCOL_TRC_SMPTE2084 should in
this case be interpreted as 'PQ with reshaping'.
* YCgCo-Re and YCgCo-Ro define the bitexact YCgCo-R, where the
number of bits added to a source RGB bit depth is 2 (i.e., even)
and 1 (i.e., odd), respectively.
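For instance, a decoded Profile 5 frame could be tagged with that VUI combination using the standard AVFrame colorimetry fields (a sketch; AVCOL_SPC_IPT_C2 is the newly added identifier, the rest are existing constants):

    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>

    static void tag_dovi_profile5(AVFrame *frame)
    {
        frame->color_range     = AVCOL_RANGE_JPEG;
        frame->color_primaries = AVCOL_PRI_BT2020;
        frame->color_trc       = AVCOL_TRC_SMPTE2084; /* 'PQ with reshaping' */
        frame->colorspace      = AVCOL_SPC_IPT_C2;
        frame->chroma_location = AVCHROMA_LOC_LEFT;   /* other locations allowed */
    }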
As well as accessors, plus a function for allocating this struct with
extension blocks.
Definitions generously taken from quietvoid/dovi_tool, which is
assembled as a collection of various patent fragments, as well as output
by the official Dolby Vision bitstream verifier tool.
The NLQ pivots are not documented but should be present in the header
for profile 7 RPU format. It has been verified using Dolby's
verification toolkit.
Signed-off-by: quietvoid <tcChlisop0@gmail.com>
Signed-off-by: Niklas Haas <git@haasn.dev>
Yet another probesize, this one used to get the durations when
estimate_timings_from_pts is required. It is aimed at users interested
in better duration probing for its own sake, or who use
avformat_find_stream_info indirectly and require exact values: for
concatdec, for example, especially when streamcopying on top of it.
The current code is a performance trade-off that can fail to get video
stream durations in scenarios with high bitrates and buffering, for
files that end cleanly (as opposed to live captures): in such cases the
physical gap between the last video packet and the last audio packet is
very large.
The default behaviour is unchanged: 250k, increased step by step up to
250k << 6.
Setting this new option has two effects:
- it overrides the maximum probesize (currently 250k << 6)
- it reduces the number of steps from 6 to 1, to avoid detecting the
  audio "too early" and failing to reach a video packet.
Even if a single audio stream duration is found but the other
audio/video stream durations are not, there will be a retry, so in the
end the full user-overridden probesize is used, as the user expects.
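A usage sketch, assuming the option is exposed to callers as "duration_probesize" (the value below is only an example):

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int open_with_duration_probe(AVFormatContext **fmt, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;

        /* Ask for a single, larger duration probe instead of the default
         * stepwise 250k..250k<<6 behaviour (option name assumed). */
        av_dict_set(&opts, "duration_probesize", "50M", 0);
        ret = avformat_open_input(fmt, url, NULL, &opts);
        av_dict_free(&opts);
        if (ret < 0)
            return ret;

        return avformat_find_stream_info(*fmt, NULL);
    }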
Signed-off-by: Nicolas Gaullier <nicolas.gaullier@cji.paris>