The following command shows how to apply the vflip_vulkan filter:
ffmpeg -init_hw_device vulkan -i input.264 -vf hwupload=extra_hw_frames=16,vflip_vulkan,hwdownload,format=yuv420p output.264
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
The following command shows how to apply the hflip_vulkan filter:
ffmpeg -init_hw_device vulkan -i input.264 -vf hwupload=extra_hw_frames=16,hflip_vulkan,hwdownload,format=yuv420p output.264
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
The issue is that libavfilter depends on libavcodec, and when doing a
static build, if libavcodec also includes "libavfilter/vulkan.c", then
at link time, building programs fails because the same symbols are
defined in both libavfilter's and libavcodec's object files.
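A minimal sketch of the failure mode (the file and symbol names are
illustrative, not the actual ones):

/* template.c -- #included by one object file in each library */
int ff_shared_helper(void)
{
    return 0;
}

Statically linking both libraries into one program then fails with an
error along the lines of: multiple definition of `ff_shared_helper'.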
Linkers are, however, more permissive when the two files that include
a common template file are one-to-one identical.
Hence, to keep both files identical in the future, export all
avfilter-specific functions to a separate file.
There is work in progress to have templated files like this compiled
only once, so this is not a long-term solution.
This also removes a macro that could be used to toggle SPIR-V
compilation capability at #include time, as it could cause the files
to differ.
This commit adds a powerful and customizable gblur Vulkan filter,
which supports Gaussian kernel sizes up to 127x127.
The kernel size can be adjusted to trade quality against performance.
The following command shows how to apply the gblur_vulkan filter:
ffmpeg -init_hw_device vulkan -i input.264 -vf hwupload=extra_hw_frames=16,gblur_vulkan,hwdownload,format=yuv420p output.264
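The kernel size and blur strength can also be set explicitly, e.g.
(the sigma and size option names are assumptions; check the filter's
documentation):
ffmpeg -init_hw_device vulkan -i input.264 -vf hwupload=extra_hw_frames=16,gblur_vulkan=sigma=3:size=51,hwdownload,format=yuv420p output.264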
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
This filter conceptually maps the libplacebo `pl_renderer` API, a
high-level image rendering API designed around an internal RGB
pipeline, into libavfilter. As such, there's no way to avoid e.g.
chroma interpolation with this filter, although newer versions of
libplacebo support outputting back to subsampled YCbCr after
processing is done.
That being said, `pl_renderer` supports automatic integration of the
majority of libplacebo's shaders, ranging from debanding to tone
mapping, and also supports loading custom mpv-style user shaders, making
this API a natural candidate for getting a lot of functionality out of
relatively little code.
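An illustrative invocation, following the hwupload/hwdownload pattern
of the Vulkan filters above (the deband option name is an assumption;
consult the filter's documentation for the full list):
ffmpeg -init_hw_device vulkan -i input.mkv -vf hwupload=extra_hw_frames=16,libplacebo=deband=1,hwdownload,format=yuv420p output.mkv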
In the future, I may approach this problem either by rewriting this
filter to also support a non-renderer codepath, or by upgrading
libplacebo's renderer to support a full YCbCr pipeline.
This unfortunately requires a very new version of libplacebo
(unreleased at the time of writing) for timeline semaphore support,
but the amount of boilerplate needed to hack in backwards
compatibility would have been unreasonable.
Implements a gray world color correction algorithm
using a log-scale LAB colorspace.
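A minimal sketch of the idea (illustrative only, not the filter's
actual code): average the chroma channels in the log-scale LAB space,
then shift every pixel so the scene average becomes neutral gray.

static void gray_world_correct(float *a, float *b, int n)
{
    /* Gray-world assumption: the average chroma of a scene is neutral,
     * so any nonzero average is a color cast to be removed. */
    float avg_a = 0.f, avg_b = 0.f;
    for (int i = 0; i < n; i++) {
        avg_a += a[i];
        avg_b += b[i];
    }
    avg_a /= n;
    avg_b /= n;
    /* Shift the a/b (chroma) planes so their averages land on gray. */
    for (int i = 0; i < n; i++) {
        a[i] -= avg_a;
        b[i] -= avg_b;
    }
}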
Signed-off-by: Paul Buxton <paulbuxton.mail@googlemail.com>
Signed-off-by: Paul B Mahol <onemda@gmail.com>
Add examples of how to use this filter, and improve the code style.
Implement slice-level parallelism for the guided filter.
Add the basic version of the guided filter.
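Illustrative invocations (the guidance option name is an assumption;
see the filter's documentation): self-guided filtering of a single
input, and filtering against an explicit guidance image:
ffmpeg -i input.png -vf guided output.png
ffmpeg -i input.png -i guide.png -filter_complex guided=guidance=on output.png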
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
Reviewed-by: Steven Liu <liuqi05@kuaishou.com>
Classification is done on every detection bounding box in the frame's
side data, which is the result of object detection (the dnn_detect
filter).
Please refer to the commit log of dnn_detect for the detection
material, and see below for classification.
- download material for classification:
wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.bin
wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.xml
wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.label
- run a command such as:
./ffmpeg -i cici.jpg -vf dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:input=data:output=detection_out:confidence=0.6:labels=face-detection-adas-0001.label,dnn_classify=dnn_backend=openvino:model=emotions-recognition-retail-0003.xml:input=data:output=prob_emotion:confidence=0.3:labels=emotions-recognition-retail-0003.label:target=face,showinfo -f null -
We'll see the detection and classification results as below:
[Parsed_showinfo_2 @ 0x55b7d25e77c0] side data - detection bounding boxes:
[Parsed_showinfo_2 @ 0x55b7d25e77c0] source: face-detection-adas-0001.xml, emotions-recognition-retail-0003.xml
[Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 0, region: (1005, 813) -> (1086, 905), label: face, confidence: 10000/10000.
[Parsed_showinfo_2 @ 0x55b7d25e77c0] classify: label: happy, confidence: 6757/10000.
[Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 1, region: (888, 839) -> (967, 926), label: face, confidence: 6917/10000.
[Parsed_showinfo_2 @ 0x55b7d25e77c0] classify: label: anger, confidence: 4320/10000.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Deprecated in c29038f304.
The resample filter based upon this library has been removed as well.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Below are example steps to do object detection:
1. download and install l_openvino_toolkit_p_2021.1.110.tgz from
https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html
or get the source code (tag 2021.1), then build and install it.
2. export LD_LIBRARY_PATH with the OpenVINO library paths, for example:
.../deployment_tools/inference_engine/lib/intel64/:.../deployment_tools/inference_engine/external/tbb/lib/
3. rebuild ffmpeg from source code with configure option:
--enable-libopenvino
--extra-cflags='-I.../deployment_tools/inference_engine/include/'
--extra-ldflags='-L.../deployment_tools/inference_engine/lib/intel64'
4. download model files and a test image:
wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/face-detection-adas-0001.bin
wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/face-detection-adas-0001.xml
wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/face-detection-adas-0001.label
wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/images/cici.jpg
5. run ffmpeg with:
./ffmpeg -i cici.jpg -vf dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:input=data:output=detection_out:confidence=0.6:labels=face-detection-adas-0001.label,showinfo -f null -
We'll see the detection results as below:
[Parsed_showinfo_1 @ 0x560c21ecbe40] side data - detection bounding boxes:
[Parsed_showinfo_1 @ 0x560c21ecbe40] source: face-detection-adas-0001.xml
[Parsed_showinfo_1 @ 0x560c21ecbe40] index: 0, region: (1005, 813) -> (1086, 905), label: face, confidence: 10000/10000.
[Parsed_showinfo_1 @ 0x560c21ecbe40] index: 1, region: (888, 839) -> (967, 926), label: face, confidence: 6917/10000.
There are two faces detected, with confidences of 100% and 69.17%.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
This is the Visual Information Fidelity (VIF) filter, one of the
component filters of VMAF. It outputs the average VIF score over all
frames.
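An illustrative invocation, assuming the distorted clip is the first
input and the reference the second (as with the ssim and psnr filters):
ffmpeg -i main.mpg -i ref.mpg -lavfi vif -f null -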
Signed-off-by: Ashish Singh <ashk43712@gmail.com>