For VAAPI, if init_pool_size is not zero, the pool size is fixed.
This means the maximum number of surfaces is init_pool_size, but when
mapping a VAAPI frame to a QSV frame, init_pool_size ends up smaller
than nb_surface. The cause is that vaapi_decode_make_config()
configures init_pool_size and is called twice: the first time to
initialize the frame context and the second time to initialize the
codec. On the second call, init_pool_size is reset to its original
value, which is lower than the real pool size, because the pool_size
used to initialize the frame context must additionally include
thread_count plus 3 (to guarantee 4 base work surfaces). Add code to
make sure init_pool_size is set only once. Now the following command
line works:
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
-hwaccel_output_format vaapi -i input.264 \
-vf "hwmap=derive_device=qsv,format=qsv" \
-c:v h264_qsv output.264
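
For illustration, a minimal sketch of the guard described above; the
helper name and signature are hypothetical (the real change lives in
vaapi_decode_make_config()):

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    static void vaapi_set_pool_size(AVCodecContext *avctx,
                                    AVHWFramesContext *frames,
                                    int codec_surfaces)
    {
        /* Size the pool only once, on the frame-context init call;
         * the later codec-init call must not overwrite it. */
        if (frames->initial_pool_size)
            return;
        /* Per the commit text: thread_count plus 3 extra surfaces on
         * top of the per-codec count, guaranteeing 4 base work
         * surfaces. */
        frames->initial_pool_size =
            codec_surfaces + avctx->thread_count + 3;
    }
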
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
For film grain clips, the vaapi_av1 decoder will cache an additional 8
surfaces, which are used to store frames that have film grain applied.
So increase the pool size by 8 to avoid running out of surfaces.
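
As a sketch (helper name hypothetical), the film-grain term would
enter the pool size computation like this:

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    static void add_av1_film_grain_surfaces(AVCodecContext *avctx,
                                            AVHWFramesContext *frames)
    {
        if (avctx->codec_id == AV_CODEC_ID_AV1) {
            /* The decoder keeps up to 8 extra surfaces holding the
             * film-grain-applied frames, in addition to the
             * reference surfaces. */
            frames->initial_pool_size += 8;
        }
    }
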
Signed-off-by: Fei Wang <fei.w.wang@intel.com>
Add a function pointer field to vaapi_profile_map[], and set
profile_parser for HEVC_REXT to find the exact va_profile.
Also add format map support.
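
Sketched below, with the HEVC_REXT entry shown; the parser name is
hypothetical, and the exact table layout may differ:

    #include <va/va.h>
    #include <libavcodec/avcodec.h>

    static VAProfile vaapi_parse_hevc_rext_profile(AVCodecContext *avctx);

    static const struct {
        enum AVCodecID codec_id;
        int codec_profile;
        VAProfile va_profile;
        /* New field: resolves the exact VAProfile at runtime when one
         * codec profile maps onto several VAProfiles. */
        VAProfile (*profile_parser)(AVCodecContext *avctx);
    } vaapi_profile_map[] = {
        { AV_CODEC_ID_HEVC, FF_PROFILE_HEVC_REXT, VAProfileNone,
          vaapi_parse_hevc_rext_profile },
    };
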
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
If vaEndPicture() fails in ff_vaapi_decode_issue(), free
pic->slice_buffers.
Fixes the memory leak in ticket #7385.
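
Condensed, the corrected error path looks roughly like this (the
labels and logging text are illustrative, not the literal patch):

    vas = vaEndPicture(ctx->hwctx->display, ctx->va_context);
    if (vas != VA_STATUS_SUCCESS) {
        av_log(avctx, AV_LOG_ERROR, "Failed to end picture decode: "
               "%d (%s).\n", vas, vaErrorStr(vas));
        err = AVERROR(EIO);
        goto fail;  /* now reaches the cleanup that destroys
                     * pic->slice_buffers instead of skipping it */
    }
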
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Signed-off-by: Mark Thompson <sw@jkqxz.net>
Set the minimum version to 0.35.0 (libva 1.3.0) and remove redundant
configure tests. This also allows the proprietary libmfx fork of libva,
which always shows the version number 0.99.0 (independent of the actual
version).
This is an ABI change in libva2: previously the Intel driver had this
behaviour and it was implemented as a driver quirk, but now it is part
of the specification so all drivers must do it.
This has been deprecated in libva2 because hardware does not and will not
support it. Therefore never consider it for decode, and for encode assume
the user meant constrained baseline profile instead.
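
For the encode side, the substitution could look like this sketch (the
warning text is illustrative; the FF_PROFILE_* constants are the
public libavcodec ones):

    if (avctx->profile == FF_PROFILE_H264_BASELINE) {
        av_log(avctx, AV_LOG_WARNING, "H.264 baseline profile is not "
               "supported, using constrained baseline profile "
               "instead.\n");
        avctx->profile = FF_PROFILE_H264_CONSTRAINED_BASELINE;
    }
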
This adds a new API, which allows the API user to query the required
AVHWFramesContext parameters. This also reduces code duplication across
the hwaccels by introducing ff_decode_get_hw_frames_ctx(), which uses
the new API function. It takes care of initializing the hw_frames_ctx
if needed, and does additional error handling and API usage checking.
Support for VDA and Cuvid is missing.
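
For example, an API user could drive the new query like this (helper
name hypothetical; error handling condensed):

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    static int setup_hw_frames(AVCodecContext *avctx, AVBufferRef *device_ref)
    {
        AVBufferRef *frames_ref = NULL;
        int err;

        /* Ask the decoder which AVHWFramesContext parameters it needs. */
        err = avcodec_get_hw_frames_parameters(avctx, device_ref,
                                               AV_PIX_FMT_VAAPI, &frames_ref);
        if (err < 0)
            return err;

        err = av_hwframe_ctx_init(frames_ref);
        if (err < 0) {
            av_buffer_unref(&frames_ref);
            return err;
        }

        avctx->hw_frames_ctx = frames_ref;
        return 0;
    }
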
Signed-off-by: Anton Khirnov <anton@khirnov.net>
When profile mismatch is allowed, use the highest supported profile for
VAAPI decoding.
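
In sketch form (variable names illustrative; the flag is the public
AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH):

    if (exact_match) {
        matched_va_profile = va_profile;
    } else if (avctx->hwaccel_flags & AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH) {
        av_log(avctx, AV_LOG_WARNING, "Exact profile not supported; "
               "trying the highest supported profile instead.\n");
        matched_va_profile = highest_supported_profile;
    } else {
        return AVERROR(EINVAL);
    }
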
Signed-off-by: Jun Zhao <jun.zhao@intel.com>
Signed-off-by: Mark Thompson <sw@jkqxz.net>
Moves much of the setup logic for VAAPI decoding into lavc; the user
now need only provide the hw_frames_ctx.
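
With this change, the user's side can reduce to a get_format callback
like the following sketch, where hw_frames_ref stands for a frames
context the application created and initialized beforehand:

    #include <libavcodec/avcodec.h>
    #include <libavutil/buffer.h>

    static AVBufferRef *hw_frames_ref; /* created by the caller elsewhere */

    static enum AVPixelFormat get_vaapi_format(AVCodecContext *avctx,
                                               const enum AVPixelFormat *fmts)
    {
        for (; *fmts != AV_PIX_FMT_NONE; fmts++) {
            if (*fmts == AV_PIX_FMT_VAAPI) {
                /* All the setup lavc used to require is now just
                 * this: hand over an initialized frames context. */
                avctx->hw_frames_ctx = av_buffer_ref(hw_frames_ref);
                if (!avctx->hw_frames_ctx)
                    return AV_PIX_FMT_NONE;
                return AV_PIX_FMT_VAAPI;
            }
        }
        return AV_PIX_FMT_NONE;
    }
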
(cherry picked from commit 123ccd07c5)
(cherry picked from commit 5e879b54a3)
(cherry picked from commit 0aec37e625)
(cherry picked from commit cfa4eb4fba)
The buffer map/unmap code was present in an early version of this
patch before it was committed, but the unmap call was never removed.
While wrong, this was harmless (and therefore unnoticed) because the
buffers cannot be mapped at this point - all drivers just did nothing
with the call.
When decoding interlaced pictures, the structure is reused to render
to the same surface twice. The parameter buffers were not being
cleared, which caused the i965 driver to error out.
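
The fix, sketched (the field name follows the picture structure
described in the surrounding commits; exact placement may differ):

    /* After issuing the first field, reset the count so reusing the
     * same picture structure for the second field starts clean. */
    pic->nb_param_buffers = 0;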