It currently emulates the long-removed
avcodec_decode_audio4/avcodec_decode_video2 APIs, which obfuscates the
actual decoding flow. Restructure the decoding calls so that they
naturally follow the new avcodec_send_packet()/avcodec_receive_frame()
design.
This is not only significantly easier to read, but also shorter.
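A minimal sketch of the flow this implies (dec_ctx, pkt, frame and
process_frame() are generic placeholders, not the actual fftools
names):

    int ret = avcodec_send_packet(dec_ctx, pkt);
    if (ret < 0)
        return ret;

    while (1) {
        ret = avcodec_receive_frame(dec_ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;
        if (ret < 0)
            return ret;

        /* exactly one decoded frame is available per iteration */
        process_frame(frame);
        av_frame_unref(frame);
    }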
It is currently handled in the same loop as audio and video, but this
obscures the actual flow, because only one iteration is ever performed
for subtitles.
Also, avoid a pointless packet reference.
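Since avcodec_decode_subtitle2() is a one-shot call that consumes the
whole packet, the standalone subtitle path reduces to roughly this
(handle_subtitle() is a placeholder):

    AVSubtitle sub;
    int got_sub = 0;

    int ret = avcodec_decode_subtitle2(dec_ctx, &sub, &got_sub, pkt);
    if (ret < 0)
        return ret;
    if (got_sub) {
        handle_subtitle(&sub);
        avsubtitle_free(&sub);
    }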
process_input_packet() contains two non-interacting pieces of nontrivial
size and complexity - decoding and streamcopy. Separating them makes the
code easier to read.
The new placement requires fewer explicit conditions and is easier to
understand.
The logic should be exactly equivalent, since this is the only place
where eof_reached is set for decoding.
Passing ist=NULL is currently used to identify stream types that do not
decode into AVFrames, i.e. subtitles. That is highly non-obvious -
always pass a non-NULL InputStream and just check the type explicitly.
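In other words, instead of branching on 'ist == NULL', the subtitle
path can be selected with an explicit check along these lines (exact
field access may differ in fftools):

    if (ist->st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE) {
        /* subtitle decoding: no AVFrame is produced */
    }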
It tracks whether the decoder for this stream ever produced any frames
and its only use is for checking whether a filter input ever received a
frame - those that did not are prioritized by the scheduler.
This is awkward and unnecessarily complicated - checking whether the
filtergraph input format is valid works just as well and does not
require maintaining an extra variable.
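A sketch of the replacement check, assuming the InputFilter keeps the
negotiated format in a 'format' field that stays negative until the
first frame arrives:

    if (ifilter->format < 0) {
        /* this input never received a frame - give it priority
         * in the scheduler */
    }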
In case no decoder is available, dec_open() called from ist_use() will
fail with 'Decoding requested, but no decoder found', so this check is
redundant.
When a filtergraph input receives EOF but never saw any input frames, we
use the fallback parameters. Currently an attempt to actually configure
the filtergraph will happen elsewhere, but there is no reason to
postpone this.
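A sketch of the idea; ifilter_apply_fallback() is a hypothetical
helper, and the configure_filtergraph()/field names follow fftools
naming but are approximate:

    if (eof && ifilter->format < 0) {
        int ret;

        /* no frame ever arrived on this input - substitute the
         * fallback parameters from the corresponding stream ... */
        ret = ifilter_apply_fallback(ifilter);
        if (ret < 0)
            return ret;

        /* ... and try to configure the filtergraph right away,
         * rather than leaving the attempt to a later call site */
        ret = configure_filtergraph(ifilter->graph);
        if (ret < 0)
            return ret;
    }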
With complex filtergraphs it can happen that the filtergraph is
unconfigured because some other filter than the one we just got EOF on
is missing parameters.
Make sure that the fallback parameters for a given input are only used
when that input is unconfigured.
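The distinction matters in the guard condition; roughly (field names
approximate, use_fallback() hypothetical):

    /* wrong: fires for every input whenever the graph as a whole is
     * not configured yet, even if this input has valid parameters */
    if (!fg->graph)
        use_fallback(ifilter);

    /* right: only when this particular input is missing parameters */
    if (!fg->graph && ifilter->format < 0)
        use_fallback(ifilter);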
This algorithm has once again been refactored, this time leading to a
dropping of the old `tone_mapping_mode` field, to be replaced by a
single tunable hybrid mode with configurable strength.
We can approximately map the old modes onto the new API for backwards
compatibility. Replace deprecated enums by their integer equivalents to
safely preserve this API until the next bump.
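A sketch of the compatibility mapping; the strength values are
illustrative, hybrid_mix is assumed to be the new tunable field, and
the raw integers stand in for the deprecated PL_TONE_MAP_* enum
values, as described above:

    float hybrid_mix = 0.20f;   /* default hybrid strength (illustrative) */
    switch (s->tonemapping_mode) {
    case 1: /* was PL_TONE_MAP_RGB  */ hybrid_mix = 1.0f; break;
    case 4: /* was PL_TONE_MAP_LUMA */ hybrid_mix = 0.0f; break;
    }
    color_map_params.hybrid_mix = hybrid_mix;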
Upstream deprecated the old ad-hoc, enum/intent-based gamut mapping API
and added a new API based on colorimetrically accurate gamut mapping
functions.
The relevant change for us is the addition of several new modes, as well
as deprecation of the old options. Update the documentation accordingly.
The current code relied on pl_queue eventually returning EOF back to the
caller, which didn't work in all situations (e.g. single frame input).
Also, the current code assumed that ff_inlink_acknowledge_status only
fired once, which was patently not true, as the above edge cases
demonstrated.
Solve both issues by keeping track of the acknowledged link status and
forwarding it (instead of trying to probe the pl_queue again) in the
event that we run out of queued input frames, as well as (in CFR mode)
when we pass the indicated status PTS.
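A sketch of that latch-and-forward logic, using the standard
inlink/outlink helpers (the context fields and the drain check
queue_is_empty are approximate):

    int status;
    int64_t pts;

    /* latch the status the first time it is acknowledged;
     * ff_inlink_acknowledge_status() may report it again on
     * subsequent calls */
    if (!s->status && ff_inlink_acknowledge_status(inlink, &status, &pts)) {
        s->status     = status;
        s->status_pts = pts;
    }

    /* once the frame queue has drained - or, in CFR mode, once the
     * next output PTS passes status_pts - forward the latched status
     * instead of probing the pl_queue again */
    if (s->status && queue_is_empty) {
        ff_outlink_set_status(outlink, s->status, s->status_pts);
        return 0;
    }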
Fixes: signed integer overflow: -159584 * 5105950 cannot be represented in type 'int'
Fixes: 55165/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_BONK_fuzzer-5796023719297024
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
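The usual shape of such a fix is to promote one operand to 64 bits
before the multiplication (variable names are placeholders; the real
expression lives in the BONK decoder):

    /* before: both operands are int, the product overflows */
    int     bad  = a * b;
    /* after: promote first, keep the result in 64 bits */
    int64_t good = a * (int64_t)b;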
This should be a few nanoseconds faster (not measurable), but
collectively the whole of humankind watching HLS will save a minute.
Found-by: Leo Izen
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>