This callback is optional and should therefore only be set
if it actually does something.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Fixes: out of array access.
Earlier code assumes that an unscaled bayer to yuvj420 converter exists,
but the later code then skips yuvj420.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The `entries` value is read directly from the stream and used to
allocate memory. This change clamps `entries` to however many can
fit within the remaining atom or file size (whichever is smaller).
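As an illustration only (the helper and its names are hypothetical,
not the actual mov.c code), the clamp amounts to something like:

    #include <stdint.h>
    #include <stddef.h>

    /* Clamp a stream-supplied entry count so that the allocation cannot
     * exceed what the remaining atom/file can actually provide. */
    static unsigned clamp_entries(unsigned entries, int64_t atom_remaining,
                                  int64_t file_remaining, size_t entry_size)
    {
        int64_t remaining   = atom_remaining < file_remaining ? atom_remaining
                                                              : file_remaining;
        int64_t max_entries = remaining > 0 ? remaining / (int64_t)entry_size : 0;

        return entries > max_entries ? (unsigned)max_entries : entries;
    }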
Fixes https://crbug.com/1429357
Signed-off-by: Dale Curtis <dalecurtis@chromium.org>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Do not construct the name manually from input file/stream indices.
This is a step towards avoiding the assumption that filtergraph inputs
are always fed by demuxers.
The computation is based on demuxer properties, so that is the more
appropriate place for it. Filter code just receives the desired
start time/duration.
Inside a function, an extra ';' is a null statement;
outside of one, it is simply not allowed.
So remove them.
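For illustration (this snippet is not from the patch itself):

    static int add(int a, int b)
    {
        return a + b;
    };                  /* extra ';' at file scope: not allowed in ISO C */

    static int twice(int a)
    {
        ;               /* inside a function an extra ';' is a null statement */
        return add(a, a);
    }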
Reviewed-by: Nuo Mi <nuomi2021@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These arrays are currently accessed via uint16_t* pointers
although nothing guarantees their alignment. Furthermore,
this is problematic wrt the effective-type rules.
Fix this by using a union of arrays of uint8_t and uint16_t.
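A minimal sketch of the pattern, using a hypothetical size and type
name rather than the actual VVC declarations:

    #include <stdint.h>

    #define BUF_SIZE 64

    /* The union gives the storage the alignment of uint16_t and makes
     * both byte-wise and 16-bit accesses well-defined with regard to
     * the effective-type rules. */
    typedef union {
        uint8_t  u8 [2 * BUF_SIZE];
        uint16_t u16[BUF_SIZE];
    } PixelBuffer;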
Reviewed-by: Nuo Mi <nuomi2021@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
max_bin_idx can be at most LMCS_MAX_BIN_SIZE - 1 here,
so pivot[LMCS_MAX_BIN_SIZE + 1] may be accessed,
but pivot has only LMCS_MAX_BIN_SIZE + 1 elements
(unless the values of pivot are such that
pivot[LMCS_MAX_BIN_SIZE] is always < sample
(which it is iff it is always < 2^bit_depth - 1)).
So reorder the checks.
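A minimal sketch of the reordering, assuming a search loop of this
shape (names are hypothetical; the real loop is in the VVC LMCS code):

    #include <stdint.h>

    /* Hypothetical reconstruction: checking the index bound first lets
     * && short-circuit before pivot[i + 1] can be read one element past
     * the end of the array; the problematic order had the pivot
     * comparison first. */
    static int find_bin(const uint16_t *pivot, int max_bin_idx, uint16_t sample)
    {
        int i = 0;

        while (i <= max_bin_idx && sample >= pivot[i + 1])
            i++;
        return i;
    }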
Reviewed-by: Nuo Mi <nuomi2021@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It used to be used with preallocated packet buffers with
the old encode API, but said API is no more and therefore
there is no reason for this to be public any more.
So deprecate it and use an internal replacement in the
encoders that used it as an upper bound for the size of
their headers.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is possible because the lifetimes of both coincide.
Besides reducing the number of allocations this also simplifies
access to VAAPIFramesContext as one no longer has to
go through AVHWFramesInternal.
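A generic sketch of the pattern with made-up names (not the actual
hwcontext code):

    #include <stdlib.h>

    typedef struct PublicCtx {
        int width, height;
    } PublicCtx;

    typedef struct BackendCtx {
        PublicCtx p;             /* public part first, so one allocation
                                  * serves both and a simple cast gets
                                  * back to the backend data */
        void     *driver_handle;
    } BackendCtx;

    static PublicCtx *alloc_ctx(void)
    {
        BackendCtx *b = calloc(1, sizeof(*b));
        return b ? &b->p : NULL;
    }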
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is possible because the lifetimes of both coincide.
Besides reducing the number of allocations this also simplifies
access to VAAPIDeviceContext as one no longer has to
go through AVHWDeviceInternal.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also fixes a Clang warning:
"overlapping comparisons always evaluate to false
[-Wtautological-overlap-compare]"
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
By default the option "flv_metadata" (internally using the field
name "trust_metadata") is set to 0, meaning that we don't allocate
streams based on information in the metadata, only based on
actual streams we encounter. However, the "datastream" metadata field
would still allocate a subtitle stream.
When muxing, the "datastream" field is added if either a data stream
or subtitle stream is present - but the same metadata field is used
to preemptively create a subtitle stream only. Thus, if the field
was added due to a data stream, not a subtitle stream, the demuxer
would create a stream which won't get any actual packets.
If there was such an extra, empty subtitle stream, running
avformat_find_stream_info still used to terminate within reasonable
time before 3749eede66. After that
commit, it no longer would terminate until it reaches the max
analyze duration, which is 90 seconds for flv streams (see
e6a084641a,
24fdf7334d and
f58e011a1f).
Before that commit (which removed the deprecated AVStream.codec), the
"st->codecpar->codec_id = AV_CODEC_ID_TEXT", set within the demuxer,
would get propagated into st->codec->codec_id by numerous
avcodec_parameters_to_context(st->codec, st->codecpar) calls, then further
into st->internal->avctx->codec_id by update_stream_avctx within
read_frame_internal in libavformat/utils.c (demux.c these days).
Signed-off-by: Martin Storsjö <martin@martin.st>
If there's an audio layer with a single stream that can be rendered alone, mark it
as default. Otherwise, mark every stream as dependent.
Signed-off-by: James Almer <jamrial@gmail.com>