Fixes: CID1598548 Logically dead code
Sponsored-by: Sovereign Tech Fund
Reviewed-by: "Xiang, Haihao" <haihao.xiang@intel.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Currently the metadata fields are always copied from the last input when
the filter has multiple inputs. For example, for the overlay_qsv filter
the metadata fields from input1 are copied to the output, whereas the
regular overlay filters copy the metadata fields from input0 to the
output. With this fix, the metadata fields from input0 are copied to the
output as well.
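This is not the actual patch, only a minimal sketch of the intended direction; copy_metadata_from_input0(), out and in0 stand in for the filter's real output frame and the frame coming from input0:

#include "libavutil/dict.h"
#include "libavutil/frame.h"

/* Copy the metadata dictionary from the first input (input0) to the
 * output frame, matching what the regular overlay filter does. */
static int copy_metadata_from_input0(AVFrame *out, const AVFrame *in0)
{
    /* av_dict_copy() adds/overwrites entries in the destination dict */
    return av_dict_copy(&out->metadata, in0->metadata, 0);
}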
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
There are lots of files that don't need it: The number of object
files that actually need it went down from 2011 to 884 here.
Keep it for external users in order to not cause breakages.
Also improve the other headers a bit while at it.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
We will postpone the VPP session initialization until the input and output
frames are ready; this copy of the sequence parameters will be used to
initialize the VPP session.
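A rough sketch of the idea, where ExampleVPPContext, saved_param and the helper names are purely illustrative (the real QSVVPPContext carries many more fields):

#include <mfxvideo.h>

typedef struct ExampleVPPContext {
    mfxSession    session;
    mfxVideoParam saved_param;    /* copy of the sequence parameters */
    int           session_inited;
} ExampleVPPContext;

/* Save the parameters now ... */
static void save_vpp_param(ExampleVPPContext *ctx, const mfxVideoParam *param)
{
    ctx->saved_param = *param;    /* a plain struct copy is enough */
}

/* ... and initialize the VPP session only once frames are available. */
static mfxStatus init_vpp_when_ready(ExampleVPPContext *ctx)
{
    if (ctx->session_inited)
        return MFX_ERR_NONE;
    mfxStatus sts = MFXVideoVPP_Init(ctx->session, &ctx->saved_param);
    if (sts >= MFX_ERR_NONE)
        ctx->session_inited = 1;
    return sts;
}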
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
It only works on Linux.
$ ffmpeg -loglevel verbose -init_hw_device qsv=intel -f lavfi -i \
yuvtestsrc -vf "format=uyvy422,vpp_qsv=format=nv12" -f null -
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The members shared between QSVVPPContext and VPPContext are removed from
VPPContext, and async_depth is moved from QSVVPPParam to QSVVPPContext
so that all QSV filters using QSVVPPContext may support async depth.
In addition, QSVVPPContext may be used as the base context in other QSV
filters in the future, so that the functions defined in qsvvpp.c can be
re-used by those filters.
This commit shouldn't change the functionality of vpp_qsv / overlay_qsv.
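The resulting layout, sketched in illustrative form only (the real structs have many more members and different names for the filter-specific options):

/* Shared base context for QSV filters (simplified). */
typedef struct QSVVPPContextSketch {
    mfxSession session;
    int        async_depth;   /* moved here from QSVVPPParam */
    /* ... surface pools, frame lists, etc. ... */
} QSVVPPContextSketch;

/* vpp_qsv / overlay_qsv embed the shared context as the first member,
 * so the common helpers in qsvvpp.c can operate on the filter's priv. */
typedef struct VPPContextSketch {
    QSVVPPContextSketch qsv;   /* must stay the first member */
    int cw, ch;                /* filter-specific options */
} VPPContextSketch;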
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
When processing yuv420 frames, FFmpeg uses the same alignment for the
Y/U/V planes, while VPL and MSDK use the Y plane's pitch / 2 as the U/V
planes' pitch, which makes the U/V planes 16-byte aligned. We need to set
a separate alignment to match the runtime's behaviour.
Now the alignment is changed to 16 so that the linesizes of the U/V planes
meet the requirement of VPL/MSDK. Add a get_buffer.video callback to the
qsv filters to change the default get_buffer behaviour.
Now the following command line works fine:
ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 3082x1884 \
-i ./3082x1884.yuv -vf 'vpp_qsv=w=2466:h=1508' -f rawvideo \
-pix_fmt yuv420p 2466_1508.yuv
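A minimal sketch of allocating frames with the tighter alignment, using only public libavutil calls; hooking such an allocator up as the pad's get_buffer.video callback is omitted here:

#include "libavutil/frame.h"
#include "libavutil/pixfmt.h"

/* Allocate a video frame whose planes are 16-byte aligned, so the U/V
 * linesizes of yuv420p match what VPL/MSDK expects (Y pitch / 2). */
static AVFrame *alloc_aligned_frame(enum AVPixelFormat fmt, int w, int h)
{
    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return NULL;
    frame->format = fmt;
    frame->width  = w;
    frame->height = h;
    if (av_frame_get_buffer(frame, 16) < 0) {   /* 16-byte alignment */
        av_frame_free(&frame);
        return NULL;
    }
    return frame;
}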
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
VPP in the SDK requires the frame rate to be set to a valid value;
otherwise init will fail. So always set a default frame rate when the
input link doesn't have a valid one.
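A minimal sketch of the fallback, with pick_frame_rate() as a hypothetical helper and 25/1 chosen here only as an example default:

#include "libavutil/rational.h"

static AVRational pick_frame_rate(AVRational link_frame_rate)
{
    /* use the input link's frame rate if it is valid ... */
    if (link_frame_rate.num > 0 && link_frame_rate.den > 0)
        return link_frame_rate;
    /* ... otherwise fall back to a default so VPP init does not fail */
    return (AVRational){ 25, 1 };
}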
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
MFX_ERR_MORE_SURFACE returned from MFXVideoVPP_RunFrameVPPAsync() means
more than one output is ready [1].
Currently the value returned from MFXVideoVPP_RunFrameVPPAsync() might
be overwritten before it is checked, so the check 'ret == MFX_ERR_MORE_SURFACE'
is always false even when MFX_ERR_MORE_SURFACE is returned from
MFXVideoVPP_RunFrameVPPAsync().
[1] https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.md#video-processing-procedures
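A sketch of the intended control flow, not the filter's actual loop; run_vpp_once() and the 1000 ms sync timeout are illustrative:

#include <mfxvideo.h>

/* Returns a negative mfxStatus on failure, 1 if more output surfaces
 * are pending for this input, 0 otherwise. */
static int run_vpp_once(mfxSession session, mfxFrameSurface1 *in,
                        mfxFrameSurface1 *out)
{
    mfxSyncPoint sync = NULL;
    /* keep the RunFrameVPPAsync status in its own variable ... */
    mfxStatus ret = MFXVideoVPP_RunFrameVPPAsync(session, in, out, NULL, &sync);

    if (ret < 0)
        return ret;

    if (sync) {
        /* ... so that syncing cannot clobber it before the check below */
        mfxStatus sts = MFXVideoCORE_SyncOperation(session, sync, 1000);
        if (sts < 0)
            return sts;
    }

    return ret == MFX_ERR_MORE_SURFACE;
}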
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
A GPU hang is one of the most typical errors on Intel GPUs when
something goes wrong. It's important to recognize it explicitly for
easier bug triage. Also, this error code can be used to trigger a
GPU recovery path in self-written applications.
There were two other statuses which MediaSDK can potentially return,
MFX_ERR_NONE_PARTIAL_OUTPUT and MFX_ERR_REALLOC_SURFACE. Add them
as well.
v2: move MFX_ERR_NONE_PARTIAL_OUTPUT next to MFX_WRN_* (Haihao)
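For illustration only, a description helper of the kind such error reporting needs; it assumes an SDK new enough to define all three constants:

#include <mfxdefs.h>

static const char *example_mfx_status_desc(mfxStatus sts)
{
    switch (sts) {
    case MFX_ERR_GPU_HANG:            return "GPU hang occurred";
    case MFX_ERR_REALLOC_SURFACE:     return "bigger output surface required";
    case MFX_ERR_NONE_PARTIAL_OUTPUT: return "frame not ready, partial output available";
    default:                          return "unknown status";
    }
}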
Signed-off-by: Hon Wai Chow <hon.wai.chow@intel.com>
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Add DX11-relevant device type checks and adjust the callback with
proper MediaSDK pair type support.
Signed-off-by: Artem Galin <artem.galin@intel.com>
The function ff_qsvvpp_filter_frame should return an FFmpeg error code if
there is an error. However, without this patch it might return an SDK
error code.
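Only a coarse illustration of the idea (FFmpeg's real mapping helper covers far more statuses): translate the SDK status into an AVERROR code before returning it from the filter.

#include <errno.h>
#include <mfxdefs.h>
#include "libavutil/error.h"

static int example_map_status(mfxStatus sts)
{
    if (sts == MFX_ERR_NONE || sts > 0)    /* warnings are not failures */
        return 0;
    switch (sts) {
    case MFX_ERR_MEMORY_ALLOC: return AVERROR(ENOMEM);
    default:                   return AVERROR_UNKNOWN;
    }
}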
Reviewed-by: Soft Works <softworkz@hotmail.com>
Dump the iopattern mode and the SDK error/warning descriptions for
qsv-based filters, and the iopattern mode for qsvenc.
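A small sketch of such a dump, ignoring the opaque-memory case and using a hypothetical log_iopattern() helper:

#include <mfxvideo.h>
#include "libavutil/log.h"

static void log_iopattern(void *log_ctx, mfxU16 io_pattern)
{
    const char *in  = (io_pattern & MFX_IOPATTERN_IN_VIDEO_MEMORY)  ? "video" : "system";
    const char *out = (io_pattern & MFX_IOPATTERN_OUT_VIDEO_MEMORY) ? "video" : "system";
    av_log(log_ctx, AV_LOG_VERBOSE, "VPP iopattern: in=%s memory, out=%s memory\n",
           in, out);
}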
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Signed-off-by: Linjie Fu <linjie.justin.fu@gmail.com>
It is a copy of the relevant part in lavc/qsv, but it uses different
function names to avoid multiple definitions when linking lavc and lavf
statically.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Signed-off-by: Linjie Fu <linjie.justin.fu@gmail.com>
Currently, picref is freed by calling av_frame_free(&picref) in
submit_frame() in qsvvpp.c when working in system memory mode, and
normally it is freed in filter_frame() in vf_vpp_qsv.c when working in
other modes. A double free therefore happens when working in system
memory mode; remove the extra free to fix the memory issue.
Reproduce:
ffmpeg -init_hw_device qsv=foo -filter_hw_device foo -f rawvideo -pix_fmt nv12 -s:v 852x480 \
-i 852x480.nv12 -vf 'vpp_qsv=w=500:h=400' -f rawvideo -pix_fmt nv12 qsv.nv12
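The ownership rule being restored, sketched with a hypothetical consume() helper; the point is that the same frame must be freed exactly once, in one place:

#include "libavutil/frame.h"

static void consume(AVFrame **frame)
{
    /* ... hand the data to the SDK ... */
    av_frame_free(frame);   /* the one and only free; *frame becomes NULL */
}

static void filter_frame_sketch(AVFrame *picref)
{
    consume(&picref);
    /* picref is NULL here; freeing a stale copy of the old pointer in a
     * second place is exactly the double free this patch removes. */
}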
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Signed-off-by: Zhong Li <zhong.li@intel.com>
RGB32 (AV_PIX_FMT_BGRA on Intel platforms) may be used as an overlay with alpha blending.
So add AV_PIX_FMT_BGRA format support.
One example of alpha blending overlay: ffmpeg -hwaccel qsv -c:v h264_qsv -i BA1_Sony_D.jsv
-filter_complex 'movie=lena-rgba.png,hwupload=extra_hw_frames=16[a];[0:v][a]overlay_qsv=x=10:y=10'
-c:v h264_qsv -y out.mp4
Rename RGB32 to BGRA to make it clearer, as per Mark Thompson's suggestion.
V2: Add P010 format support, otherwise an HEVC 10-bit encoding regression would be introduced.
Thanks to Linjie for the discovery.
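For illustration, the kind of pixel-format to FourCC mapping this implies (the real table in the filter covers more formats):

#include <mfxvideo.h>
#include "libavutil/pixfmt.h"

static unsigned example_fourcc_for(enum AVPixelFormat fmt)
{
    switch (fmt) {
    case AV_PIX_FMT_BGRA: return MFX_FOURCC_RGB4;   /* "RGB32" in SDK terms */
    case AV_PIX_FMT_P010: return MFX_FOURCC_P010;
    case AV_PIX_FMT_NV12: return MFX_FOURCC_NV12;
    default:              return 0;
    }
}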
Signed-off-by: Zhong Li <zhong.li@intel.com>
Verified-by: Fu, Linjie <linjie.fu@intel.com>
Solve some issues found by an automated code scan.
Suppress the complaint that variable 'handle' may be used
uninitialized.
Signed-off-by: Zhong Li <zhong.li@intel.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
For filters based on framesync, the input frame is managed by framesync,
so we should not directly keep and destroy it; instead we make a clone
of it here, or else a double free will occur.
But for other filters not based on framesync, we still need to free the
input frame inside filter_frame.
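A minimal sketch of that rule, with take_input() as a hypothetical helper:

#include "libavutil/frame.h"

static AVFrame *take_input(AVFrame *in, int uses_framesync)
{
    if (uses_framesync)
        return av_frame_clone(in);   /* framesync keeps and frees 'in' */
    return in;   /* non-framesync filters own 'in' and free it later */
}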
Signed-off-by: Ruiling Song <ruiling.song@intel.com>
The filter supports two inputs and (implicitly) scaling the second input
during composition, unlike the software overlay.
The code has been separated into a common interface and the qsv overlay
implementation. The common part mainly creates the qsv session and
manages the surfaces, which is nearly the same for all qsv filters,
so the qsvvpp.c/qsvvpp.h API can be used by other QSV vpp filters
to reduce code redundancy.
Usage:
-hwaccel qsv -c:v mpeg2_qsv -r 25 -i in.m2v -hwaccel qsv -c:v h264_qsv
-i in.h264 -filter_complex
"overlay_qsv=eof_action=repeat:x=(W-w)/2:y=(H-h)/2" -b 2M -maxrate 3M
-c:v h264_qsv -y out.h264
The two inputs should have different sizes, otherwise one will be
completely covered, or you need to scale the second input as follows:
-hwaccel qsv -c:v mpeg2_qsv -r 25 -i in.m2v -hwaccel qsv -c:v h264_qsv
-i in.h264 -filter_complex
"overlay_qsv=w=720:h=576:x=(W-w)/2:y=(H-h)/2" -b 2M -maxrate 3M -c:v
h264_qsv -y out.h264
Signed-off-by: ChaoX A Liu <chaox.a.liu@gmail.com>
Signed-off-by: Zhengxu Huang <zhengxu.maxwell@gmail.com>
Signed-off-by: Andrew Zhang <huazh407@gmail.com>
Change-Id: I5c381febb0af6e2f9622c54ba00490ab99d48297
Signed-off-by: Maxym Dmytrychenko <maxim.d33@gmail.com>