The latter need not be safe, because av_log() expects
to get a pointer to an AVClass-enabled structure
and not merely a fake object. If this function were actually
called in the following way:
const AVClass *avcl = avctx->av_class;
handle_mdcv(&avcl, ...);
the AVClass's item_name callback would expect avcl to point
to an actual AVCodecContext, potentially leading to a segfault.
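A minimal sketch of the hazard (FakeBox and the message are invented
for illustration):
    typedef struct FakeBox { const AVClass *av_class; } FakeBox;
    FakeBox box = { avctx->av_class };
    av_log(&box, AV_LOG_ERROR, "oops\n");
    /* If av_class->item_name() casts its argument to AVCodecContext*
     * and reads fields beyond av_class, it reads past FakeBox. */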
Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This code uses the AVBPrint API for exactly one av_bprintf()
in a scenario in which a good upper bound for the needed
size of the buffer is available (with said upper bound being
much smaller than sizeof(AVBPrint)). So one can simply use
snprintf() instead. This also avoids the check for whether
the AVBPrint is complete; given the current size of the
internal AVBPrint buffer, it always is.
Furthermore, the old code used AV_BPRINT_SIZE_AUTOMATIC
which implies that the AVBPrint buffer will never be
(re)allocated and yet it used av_bprint_finalize().
This has of course also been removed.
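A sketch of the replacement pattern (format string and bound are
hypothetical, not the actual code):
    char buf[64]; /* known upper bound, far below sizeof(AVBPrint) */
    snprintf(buf, sizeof(buf), "%ux%u", width, height);
    /* replaces av_bprint_init()/av_bprintf()/av_bprint_finalize();
     * no completeness check is needed as truncation cannot occur */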
Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
AVCodecContext extradata should be an AVCDecoderConfigurationRecord
when the bitstream format is avcc. Simply concatenating the NALUs
output by x264_encoder_headers does not form a standard
AVCDecoderConfigurationRecord. Before the patch, the following
command generated a broken file:
ffmpeg -i foo.mp4 -c:v libx264 -x264-params annexb=0 bar.mp4
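For reference, a simplified sketch of the record layout per
ISO/IEC 14496-15; note the length-prefixed parameter sets, which plain
concatenation of Annex B start-code NALUs does not provide:
    /* AVCDecoderConfigurationRecord, simplified:
     *   u8  configurationVersion = 1
     *   u8  AVCProfileIndication, u8 profile_compatibility,
     *   u8  AVCLevelIndication
     *   u8  6 reserved bits + 2-bit lengthSizeMinusOne
     *   u8  3 reserved bits + 5-bit numOfSequenceParameterSets
     *   for each SPS: u16 length, then the NALU bytes
     *   u8  numOfPictureParameterSets
     *   for each PPS: u16 length, then the NALU bytes */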
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Many users mistakenly think that a hwaccel is an instance of a decoder,
and then cannot find the corresponding decoder name in the logs. Log
the hwaccel name so users know the hwaccel has taken effect.
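The added message is of roughly this shape (exact wording and variable
name are assumptions):
    av_log(avctx, AV_LOG_INFO, "Using %s hwaccel for decoding\n",
           hwaccel_name);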
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
When the compiler chooses to inline put_amf_string(pb, ""),
the avio_write(pb, "", 0) can be avoided. This happens with
Clang 17 at -O1 and higher and with GCC 13 at -O2 and higher
here.
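A sketch of the shape that enables this (the real helper lives in the
FLV muxer; the body here is approximated):
    static void put_amf_string(AVIOContext *pb, const char *str)
    {
        size_t len = strlen(str);
        avio_wb16(pb, len);
        if (len) /* folds away when inlined with a constant "" */
            avio_write(pb, str, len);
    }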
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These were defined in a way compatible with the Vulkan HEVC acceleration, which
expects bitmasks, yet the fields were being overwritten on each loop iteration
with the latest read value.
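Schematically, with hypothetical names, the fix turns an assignment
into an accumulation:
    for (int i = 0; i <= num_sublayers; i++) {
        /* before: hdr->sub_layer_flags = get_bits1(gb);  last value wins */
        hdr->sub_layer_flags |= get_bits1(gb) << i; /* accumulate bitmask */
    }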
Signed-off-by: James Almer <jamrial@gmail.com>
While ensuring it's at least C11, the minimum supported version.
Also, enforce C11 on the host compiler, the same as we already do
for the target compiler.
Tested-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
The newer of these two are the separate integers for content light
level, introduced in 3952bf3e98c76c31594529a3fe34e056d3e3e2ea
with X265_BUILD 75. As we already require X265_BUILD of at least
89, no further conditions are required.
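A sketch of feeding the side data into those integers (x265's
maxCLL/maxFALL fields; the surrounding code is approximated):
    const AVContentLightMetadata *cll =
        (const AVContentLightMetadata *)sd->data;
    param->maxCLL  = cll->MaxCLL;
    param->maxFALL = cll->MaxFALL;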
Both of these two structures were first available with X264_BUILD
163, so make the relevant functionality conditional on the version
being at least that.
Keep handle_side_data available in all cases, as this way X264_init
does not require additional version-based conditions within it.
Finally, add a FATE test which verifies that pass-through of the
MDCV/CLL side data is working during encoding.
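The version gating, schematically (the populated fields exist in
x264.h since X264_BUILD 163, per the above; the body is approximated):
    #if X264_BUILD >= 163
        /* fill x4->params.mastering_display and
         * x4->params.content_light_level from the frame side data */
    #endif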
These two were added in 28e23d7f348c78d49a726c7469f9d4e38edec341
and 3558c1f2e97455e0b89edef31b9a72ab7fa30550 for version 0.9.0 of
SVT-AV1, which is also our minimum requirement right now.
In other words, no additional version-limiting conditions seem
to be required.
Additionally, add a FATE test which verifies that pass-through of
the MDCV/CLL side data is working during encoding.
The second part of this condition is intended to check whether the
current quantisation group is in the first CTU column of the current
tile. The issue is that ctb_to_col_bd gives the x-ordinate of the first
column of the current tile *in CTUs*, while xQg gives the x-ordinate of
the quantisation group *in samples*. Rectify this by shifting xQg right
by ctb_log2_size to convert it to CTUs before comparing.
Fixes FFVVC issues #201 and #203.
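Schematically (identifier usage approximated from the description
above):
    /* before (unit mismatch: xQg in samples, ctb_to_col_bd[] in CTUs): */
    if (xQg == ctb_to_col_bd[xQg >> ctb_log2_size])
    /* after (both sides in CTUs): */
    if ((xQg >> ctb_log2_size) == ctb_to_col_bd[xQg >> ctb_log2_size])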
This muxer does not have the AVFMT_NOSTREAMS flag; therefore
it is checked generically that there is at least one stream.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Since e0da916b8f, the ffmpeg utility has
held multiple frames output by the decoder in internal queues without
telling the decoder that it is going to do so. When the decoder has a
fixed-size pool of frames (common in some hardware APIs where the output
frames must be stored as an array texture) this could lead to the pool
being exhausted and the decoder getting stuck. Fix this by telling the
decoder to allocate additional frames according to the queue size.
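Conceptually, the fix enlarges the decoder's pool through the existing
AVCodecContext field (the sizing shown here is a simplification):
    /* before opening the decoder: reserve pool entries for the frames
     * that ffmpeg will keep queued downstream */
    dec_ctx->extra_hw_frames += max_queued_frames;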
In MPEG-2 user data, different types of Closed Captions formats
can be embedded (A53, SCTE-20, or DVD). The CC extraction code
in the MPEG-2 decoder is currently not aware that multiple
formats may coexist, allowing one format to overwrite the other
during extraction, since the extraction shares one output buffer
for the normalized bytes.
This causes sources that carry two CC formats to produce flawed output.
There exist real-world samples which contain both A53 and SCTE-20 captions
in the same MPEG-2 stream and which manifest this problem. Example of symptom:
THANK YOU (expected) --> THTHANANK K YOYOUU (actual)
The solution is to pick only the first CC substream observed with valid bytes,
and ignore the other types. Additionally, provide an option for users
to manually "force" a type in the event that this matters for a particular
source.
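A usage sketch of forcing one format (the option and value spellings
here are assumptions):
ffmpeg -cc_format a53 -i input.ts output.mkv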
Signed-off-by: Marth64 <marth64@proxyid.net>
PyTorch is an open source machine learning framework that accelerates
the path from research prototyping to production deployment. Official
website: https://pytorch.org/. The C++ library of PyTorch is called
LibTorch, and that name is used below.
To build FFmpeg with LibTorch, take the following steps as
a reference:
1. Download the LibTorch C++ library from
https://pytorch.org/get-started/locally/;
select C++/Java as the language, and the other options as needed.
Download the cxx11 ABI version
(libtorch-cxx11-abi-shared-with-deps-*.zip).
2. Unzip the file to a directory of your choice, with the command
unzip libtorch-shared-with-deps-latest.zip -d your_dir
3. Export libtorch_root/libtorch/include and
libtorch_root/libtorch/include/torch/csrc/api/include to $PATH, and
export libtorch_root/libtorch/lib/ to $LD_LIBRARY_PATH.
4. Configure FFmpeg with
../configure --enable-libtorch \
--extra-cflags=-I/libtorch_root/libtorch/include \
--extra-cflags=-I/libtorch_root/libtorch/include/torch/csrc/api/include \
--extra-ldflags=-L/libtorch_root/libtorch/lib/
5. make
To run FFmpeg DNN inference with the LibTorch backend:
./ffmpeg -i input.jpg -vf \
dnn_processing=dnn_backend=torch:model=LibTorch_model.pt -y output.jpg
The LibTorch_model.pt file can be generated in Python with the
torch.jit.script() API; see the official PyTorch guide on how to
convert and load a TorchScript model:
https://pytorch.org/tutorials/advanced/cpp_export.html
Please note that torch.jit.trace() is not recommended, since it does
not support variable input sizes.
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>