Since at least FFmpeg 4.4.3, the -ab/-b:a help text has appeared in the
video section of ffmpeg -h, but these are audio options.
Signed-off-by: Marth64 <marth64@proxyid.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The current HLS implementation simply skips a failed segment to catch up
with the stream, but this is not optimal for some use cases such as
livestream recording.
Add an option to retry a failed segment to ensure the output file is
a complete stream.
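A usage sketch follows; it assumes the retry count is exposed as an HLS
demuxer option named seg_max_retry, which may not match the final
interface:
$ ffmpeg -seg_max_retry 3 -i https://example.com/live/index.m3u8 \
-c copy recording.ts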
Signed-off-by: gnattu <gnattuoc@me.com>
Reviewed-by: Steven Liu <liuqi05@kuaishou.com>
We parse the fallback cHRM on decode and correctly determine that we
have BT.709 primaries, but unknown TRC. This causes us to write cICP
where we shouldn't. Primaries without transfer can be handled entirely
by cHRM, so we should only write cICP if we actually know the transfer
function.
Additionally, we should avoid writing cICP if there's an ICC profile
because the spec says decoders must prioritize cICP over the ICC
profile.
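A minimal sketch of the intended decision, with illustrative names rather
than the actual pngenc.c variables:

    #include <libavutil/pixfmt.h>

    /* Sketch only: write cICP only when the transfer characteristic is
     * actually known, and never alongside an ICC profile, since decoders
     * must prioritize cICP over the profile. Primaries alone can be
     * carried by cHRM. */
    static int should_write_cicp(enum AVColorPrimaries prim,
                                 enum AVColorTransferCharacteristic trc,
                                 int have_icc_profile)
    {
        if (have_icc_profile)
            return 0;
        if (trc == AVCOL_TRC_UNSPECIFIED)
            return 0;
        return prim != AVCOL_PRI_UNSPECIFIED;
    }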
Signed-off-by: Leo Izen <leo.izen@gmail.com>
We need to construct the output format list separatedly from the input
format list, because we need to adhere to two extra requirements:
1. Big-endian output formats are always unsupported (runtime error)
2. Combining 'vulkan' with an explicit out_format that is not supported
by the vulkan frame allocation code is illegal and will crash (abort)
As a free side benefit, this rewrite fixes a possible memory leak in the
`fail` path that was present in the old code.
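For illustration only, the shape of the two checks could look like the
sketch below; the names are made up and the real code builds a format
list rather than checking one format at a time:

    #include <errno.h>
    #include <libavutil/error.h>
    #include <libavutil/pixdesc.h>

    static int check_out_format(enum AVPixelFormat fmt, int vulkan_output,
                                int (*vulkan_can_alloc)(enum AVPixelFormat))
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        if (desc->flags & AV_PIX_FMT_FLAG_BE)
            return AVERROR(ENOTSUP); /* requirement 1: runtime error */
        if (vulkan_output && !vulkan_can_alloc(fmt))
            return AVERROR(EINVAL);  /* requirement 2: reject up front */
        return 0;
    }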
Signed-off-by: Niklas Haas <git@haasn.dev>
It only works on Linux.
$ ffmpeg -loglevel verbose -init_hw_device qsv=intel -f lavfi -i \
yuvtestsrc -vf "format=uyvy422,vpp_qsv=format=nv12" -f null -
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The SDK has supported UYVY since version 1.17, and VPP may support UYVY
input on Linux [1].
$ ffmpeg -loglevel verbose -init_hw_device qsv=intel -f lavfi -i \
yuvtestsrc -vf \
"format=uyvy422,hwupload=extra_hw_frames=32,vpp_qsv=format=nv12" \
-f null -
[1] https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/samples/readme-vpp_linux.md
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Since many distributions still ship libjxl 0.7.0, we'd prefer to keep
compiling against it, but we don't want to lose the features that require
libjxl 0.8.0 or greater. For this reason I've added preprocessor #ifdef
guards around the features that aren't necessarily available in libjxl 0.7.0.
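The guards follow the usual libjxl version-check pattern; a sketch,
assuming the macros from <jxl/version.h> (the guarded calls differ per
feature):

    #include <jxl/version.h>

    #if JPEGXL_NUMERIC_VERSION >= JPEGXL_COMPUTE_NUMERIC_VERSION(0, 8, 0)
        /* use the newer libjxl API here */
    #else
        /* fall back to what libjxl 0.7.0 already provides */
    #endif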
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Splits the currently handled subtitle at random access point
packets that can be configured to follow a specific output stream.
Currently only subtitle streams which are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way the subtitle, which is known to be shown at this time, can be
split and passed to the muxer before its full duration is known. This is
also a drawback, as it essentially outputs multiple subtitles from a
single input subtitle that continues over multiple random access points.
Thus this feature should not be utilized in cases where subtitle output
latency does not matter.
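A usage sketch, assuming the feature is exposed as a per-output-stream
boolean option named fix_sub_duration_heartbeat; the option name and
syntax here are assumptions, not the final interface:
$ ffmpeg -fix_sub_duration -i input.ts -map 0:v -map 0:a -map 0:s \
-fix_sub_duration_heartbeat:v:0 1 \
-c:v libx264 -c:a aac -c:s mov_text out.mp4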
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This way we can call process_subtitles without causing the decoded
frame counter to get bumped.
Additionally, this now takes into account all of the decoded subtitle
frames, without the fix_sub_duration latency/buffering or the filtering
out of decoded reset/end subtitles that have no rendered rectangles,
which matches the original intent in 4754345027.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This enables us to later call this when generating additional
subtitles for splitting purposes.
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
QSVDeintContext and VPPContext share the same base context, and all
features of deinterlace_qsv are implemented in the vpp_qsv filter, so
deinterlace_qsv can be treated as a special case of the vpp_qsv filter.
We may therefore use VPPContext with a different option array, preinit
callback and supported pixel formats to implement deinterlace_qsv, and
then remove QSVDeintContext.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
As was done for the scale_qsv filter, use QSVVPPContext as the base
context to manage the MFX session for the deinterlace_qsv filter.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
This is used to control whether output is produced at frame rate or at
field rate when deinterlacing is expected and a framerate is not
specified.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The decoder is quite slow with the maximum number of taps.
Fixes: Timeout
Fixes: 54063/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_BONK_fuzzer-5087362407596032
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
From x86inc:
> On AMD cpus <=K10, an ordinary ret is slow if it immediately follows either
> a branch or a branch target. So switch to a 2-byte form of ret in that case.
> We can automatically detect "follows a branch", but not a branch target.
> (SSSE3 is a sufficient condition to know that your cpu doesn't have this problem.)
x86inc can automatically determine whether to use REP_RET rather than
RET in most of these cases, so the impact is minimal. Additionally, a few
REP_RETs were used unnecessarily, despite the return being nowhere near a
branch.
The only CPUs affected were AMD K10s, made between 2007 and 2011, 16
years ago and 12 years ago, respectively.
In the future, everyone involved with x86inc should consider dropping
REP_RETs altogether.
libjxl only accepts 16-bit buffers through its API, but it can accept
9-bit to 15-bit input via a 16-bit buffer, provided a flag is set
declaring the buffer's actual significant bit depth.
Likewise, it can only provide pixel data on decode as a 16-bit buffer
(if higher than 8) but does provide the metadata tagging the actual bit
depth.
This commit causes libjxlenc.c and libjxldec.c to respect this metadata
and tag/read it accordingly from AVCodecContext->bits_per_raw_sample.
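A sketch of the encode-side tagging (the decode side reads the same field
back into bits_per_raw_sample); names besides the libjxl API are
illustrative:

    #include <jxl/encode.h>
    #include <libavcodec/avcodec.h>

    /* Declare the real significant depth of the 16-bit buffer handed to
     * libjxl, taken from bits_per_raw_sample when it is set. */
    static void tag_bit_depth(JxlEncoder *enc, JxlBasicInfo *info,
                              const AVCodecContext *avctx)
    {
        JxlEncoderInitBasicInfo(info);
        info->bits_per_sample = avctx->bits_per_raw_sample ?
                                avctx->bits_per_raw_sample : 16;
        JxlEncoderSetBasicInfo(enc, info); /* error handling omitted */
    }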
Signed-off-by: Leo Izen <leo.izen@gmail.com>
mfenc sets FF_CODEC_CAP_INIT_CLEANUP, so calling mf_close() on
failure inside mf_init() results in a double-free.
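A hypothetical sketch of the pattern, not the real mf_init(): with
FF_CODEC_CAP_INIT_CLEANUP set in caps_internal, libavcodec invokes the
close callback itself after a failed init, so init must only return the
error:

    #include <libavcodec/avcodec.h>

    static int mf_init_sketch(AVCodecContext *avctx)
    {
        int ret = AVERROR_EXTERNAL; /* stand-in for a failed MF setup step */
        if (ret < 0)
            return ret; /* no mf_close() here; lavc runs the cleanup once */
        return 0;
    }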
Signed-off-by: Cameron Gutman <aicommander@gmail.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
Only warn if the advanced_editlist option is enabled (it is enabled by
default, though) so we don't print one warning for each track, and demote
the warning to AV_LOG_LEVEL_VERBOSE; this message gets generated whenever
a fragmented MP4 file is parsed, regardless of whether the file actually
uses multiple edits or not.
Later, when parsing the mov structures, the demuxer does warn if the
file did contain multiple edits, which would require the
advanced_editlist option to be enabled for correct decoding.
Adjust the warning message for the case when the file actually seems to
have needed handling of advanced edit lists, to reflect the fact that it
doesn't help to try to set the option, as it has been automatically
disabled.
Signed-off-by: Martin Storsjö <martin@martin.st>
The construct of using offsetof on a (potentially anonymous) struct
defined within the offsetof expression, while supported by all
current compilers, has been declared explicitly undefined by the
C standards committee [1].
Clang recently got a change to identify this as an issue [2];
initially it was treated as a hard error, but it was soon after
softened into a warning under the -Wgnu-offsetof-extensions option
(not enabled automatically as part of -Wall though).
Nevertheless - in this particular case, it's trivial to fix the
code not to rely on the construct that the standards committee has
explicitly called out as undefined.
[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2350.htm
[2] https://reviews.llvm.org/D133574
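For illustration, the undefined construct and a conforming rewrite look
roughly like this (simplified, not the exact code being changed):

    #include <stddef.h>

    /* Undefined per the committee's reading: the struct type is defined
     * inside the offsetof() expression itself.
     *
     *     size_t off = offsetof(struct { int a; int b; }, b);
     *
     * Well-defined: name the type first, then take the offset. */
    struct ab { int a; int b; };
    static const size_t off = offsetof(struct ab, b);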
Signed-off-by: Martin Storsjö <martin@martin.st>