This algorithm has once again been refactored, this time dropping the
old `tone_mapping_mode` field and replacing it with a single tunable
hybrid mode with configurable strength.
We can approximately map the old modes onto the new API for backwards
compatibility. Replace deprecated enums by their integer equivalents to
safely preserve this API until the next bump.
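As a rough sketch of how that mapping can look (the option names, integer
values, and FLAGS macro below are placeholders, not the exact vf_libplacebo
table), the AVOption constants keep accepting the old mode names while
storing plain integers instead of the deprecated enum identifiers:

    /* illustrative only: plain integers stand in for the deprecated enums */
    { "auto",   "Automatic selection", 0, AV_OPT_TYPE_CONST, {.i64 = 0}, 0, 0, FLAGS, "tonemapping_mode" },
    { "rgb",    "Per-channel (RGB)",   0, AV_OPT_TYPE_CONST, {.i64 = 1}, 0, 0, FLAGS, "tonemapping_mode" },
    { "hybrid", "Hybrid of the above", 0, AV_OPT_TYPE_CONST, {.i64 = 3}, 0, 0, FLAGS, "tonemapping_mode" },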
Upstream deprecated the old ad-hoc, enum/intent-based gamut mapping API
and added a new API based on colorimetrically accurate gamut mapping
functions.
The relevant change for us is the addition of several new modes, as well
as deprecation of the old options. Update the documentation accordingly.
The current code relied on pl_queue eventually returning EOF to the
caller, which did not work in all situations (e.g. single-frame input).
Also, the current code assumed that ff_inlink_acknowledge_status() only
fired once, which was patently untrue, as the above edge cases
demonstrated.
Solve both issues by keeping track of the acknowledged link status and
forwarding it (instead of trying to probe the pl_queue again) in the
event that we run out of queued input frames, as well as (in CFR mode)
when we pass the indicated status PTS.
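In rough outline (the struct field names and the exhaustion/CFR conditions
below are placeholders for illustration), the logic becomes:

    /* remember the acknowledged link status instead of probing pl_queue again */
    if (!s->status && ff_inlink_acknowledge_status(inlink, &status, &pts)) {
        s->status     = status;
        s->status_pts = pts;
    }

    /* forward it once the queued input frames are exhausted, or (in CFR mode)
     * once the next output PTS passes the indicated status PTS */
    if (s->status && (no_more_queued_frames || next_pts >= s->status_pts)) {
        ff_outlink_set_status(outlink, s->status, s->status_pts);
        return 0;
    }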
This exposes libplacebo's frame mixing functionality to vf_libplacebo,
by allowing users to specify a desired target fps to output at. Incoming
frames will be smoothly resampled (in a manner determined by the
`frame_mixer` option, to be added in the next commit).
To generate a consistently timed output stream, we directly use the
desired framerate as the timebase, and simply output frames in
sequential order (tracked by the number of frames output so far).
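Schematically (the context field names are assumptions for illustration),
the timing reduces to:

    /* use the requested frame rate directly as the output timebase ... */
    outlink->frame_rate = s->fps;
    outlink->time_base  = av_inv_q(s->fps);

    /* ... so each emitted frame's PTS is simply the running output counter */
    out->pts = s->frames_out++;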
To preserve compatibility with the current behavior, we keep track of a
FIFO of exact frame timestamps that we want to output to the user. In
practice, this is essentially equivalent to the current filter_frame()
code, but this design allows us to scale to more complicated use cases
in the future - for example, insertion of intermediate frames
(deinterlacing, frame doubling, conversion to fixed fps, ...).
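A minimal sketch of that bookkeeping, assuming an AVFifo of int64_t
timestamps (the real filter may store more per entry):

    /* created once, e.g. at init time */
    AVFifo *out_pts = av_fifo_alloc2(8, sizeof(int64_t), AV_FIFO_FLAG_AUTO_GROW);

    /* on each input frame: remember the exact timestamp we owe to the user */
    av_fifo_write(out_pts, &in->pts, 1);

    /* when emitting a frame: pop the next timestamp and assign it */
    av_fifo_read(out_pts, &out->pts, 1);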
This does not provide any immediate benefits, but refactors and
prepares the codebase for upcoming changes, which will include the
ability to do deinterlacing and resampling (frame mixing).
This commit contains no functional change. The goal is merely to split
the highly intertwined `filter_frame` and `process_frames` functions
into their separate concerns: frame uploading (which is now done
directly in `filter_frame`) and frame emission (which is now handled by
a dedicated `output_frame` function).
The overall idea here is to be able to ultimately call `output_frame`
multiple times, to e.g. emit several output frames for a single input
frame.
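In outline (only the function names come from this change; `upload_frame`
and the bodies are illustrative):

    static int filter_frame(AVFilterLink *inlink, AVFrame *in)
    {
        AVFilterContext *ctx = inlink->dst;
        int ret;

        /* frame uploading now happens directly here ... */
        ret = upload_frame(ctx, in);   /* hypothetical helper */
        if (ret < 0)
            return ret;

        /* ... while emitting a frame is its own concern, so it can later be
         * called more than once per input frame */
        return output_frame(ctx, in->pts);
    }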
For example:
ffmpeg -f lavfi -i sine -af "aformat=cl=stereo|5.1|7.1,lowpass,aformat=cl=7.1|5.1|stereo" -f null -
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Before this commit, if allocation failed in the ff_add_channel_layout()
function, it would return a negative error code, which would cause the
wrong format to be picked up later. If allocation did not fail, the
return code would be 0 and format negotiation would simply fail, as the
code would break out of the loop but with the wrong return code.
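Roughly speaking (simplified, with the surrounding loop and the exact
arguments omitted), the safe handling is to propagate an allocation failure
right away instead of letting its error code escape the loop as a bogus
negotiation result:

    ret = ff_add_channel_layout(&layouts, layout);
    if (ret < 0)
        return ret;   /* allocation failure, not a negotiation outcome */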
The error was introduced in commit 6aaac24d72.
Fixes #6638
This is not public API, so it has no need for alloc() and free()
functions. The struct can reside on the stack.
Signed-off-by: James Almer <jamrial@gmail.com>