Change the slice/parameter buffers to be allocated dynamically.
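A minimal sketch of the dynamic-allocation pattern, assuming a hypothetical buffer-ID list (names are illustrative, not the actual vaapi_encode fields): grow the array with av_realloc_array() as parameter buffers are created, instead of relying on a fixed-size array.

    #include <libavutil/error.h>
    #include <libavutil/mem.h>
    #include <va/va.h>

    typedef struct ParamBufferList {
        VABufferID *ids;    // dynamically grown array of buffer IDs
        int         nb_ids;
    } ParamBufferList;

    static int param_buffer_list_add(ParamBufferList *list, VABufferID id)
    {
        VABufferID *tmp = av_realloc_array(list->ids, list->nb_ids + 1,
                                           sizeof(*list->ids));
        if (!tmp)
            return AVERROR(ENOMEM);   // old array stays valid on failure
        list->ids = tmp;
        list->ids[list->nb_ids++] = id;
        return 0;
    }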
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Signed-off-by: Jun Zhao <jun.zhao@intel.com>
Signed-off-by: Mark Thompson <sw@jkqxz.net>
Use AVCodecContext.compression_level rather than a private option,
replacing the H.264-specific quality option (which stays only for
compatibility).
This now works with the H.265 encoder in the i965 driver, as well as
the existing cases with the H.264 encoder.
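As an illustration of the API-level effect (a sketch only; the value chosen is arbitrary and driver-dependent), a caller now sets the generic field on the codec context instead of a codec-private quality option:

    #include <libavcodec/avcodec.h>

    static int open_encoder(AVCodecContext *avctx, const AVCodec *codec)
    {
        // Generic field, mapped by the VAAPI encoders to the driver's
        // quality/compression level; valid range depends on the driver.
        avctx->compression_level = 2;
        return avcodec_open2(avctx, codec, NULL);
    }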
(cherry picked from commit 19388a7200)
The non-H.26[45] codecs already use this form. Since we don't
currently generate I frames for codecs which support them separately
to IDR, the p_per_i variable is set to infinity by default so that it
doesn't interfere with any other calculation. (All the code for I
frames still exists, and it works for H.264 if set manually.)
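A sketch of what "infinity by default" means in practice (field and default names are illustrative): p_per_i defaults to INT_MAX, so unless it is set explicitly the I-frame interval never becomes the limiting factor.

    #include <limits.h>

    typedef struct GOPConfig {
        int gop_size;   // IDR period
        int p_per_i;    // P frames between I frames
        int b_per_p;    // B frames between P frames
    } GOPConfig;

    static const GOPConfig gop_defaults = {
        .gop_size = 120,
        .p_per_i  = INT_MAX,  // "infinity": plain I frames never scheduled
        .b_per_p  = 0,
    };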
(cherry picked from commit 6af014f402)
Previously this was leaking, though in practice it hit an assert checking
that the buffer had already been cleared when freeing the picture.
(cherry picked from commit 17aeee5832)
Only do this when building for a recent VAAPI version - initial
driver implementations were confused about the interpretation of the
framerate field, but hopefully this will be consistent everywhere
once 0.40.0 is released.
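A sketch of the guard and of how the 32-bit framerate field is packed (per later libva documentation the low 16 bits carry the numerator and the high 16 bits the denominator; treat that as an assumption for older drivers):

    #include <va/va.h>

    #if VA_CHECK_VERSION(0, 40, 0)
    static void fill_framerate(VAEncMiscParameterFrameRate *fr,
                               unsigned int num, unsigned int den)
    {
        // Low 16 bits: numerator; high 16 bits: denominator (0 means 1).
        fr->framerate = (num & 0xffff) | (den << 16);
    }
    #endif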
(cherry picked from commit ff35aa8ca4)
This includes a backward-compatibility hack to choose CBR anyway on
old drivers which have no VBR support, so that existing programs will
continue to work even though their options now map to VBR.
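A minimal sketch of the fallback (illustrative helper, not the actual rate-control setup code):

    #include <va/va.h>

    static unsigned int pick_rc_mode(unsigned int supported_rc_modes)
    {
        if (supported_rc_modes & VA_RC_VBR)
            return VA_RC_VBR;
        // Backward-compatibility hack: old drivers without VBR keep CBR.
        return VA_RC_CBR;
    }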
(cherry picked from commit f033ba470f)
This change makes the configured GOP size be respected exactly -
previously the value could be exceeded slightly due to flaws in the
frame type selection logic.
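The intended behaviour, sketched with illustrative names (not the real frame-type code): an IDR is emitted exactly every gop_size frames, never later.

    typedef enum { FRAME_IDR, FRAME_P } FrameType;

    static FrameType next_frame_type(int *gop_counter, int gop_size)
    {
        if (*gop_counter == 0 || *gop_counter >= gop_size) {
            *gop_counter = 1;          // start a new GOP with an IDR
            return FRAME_IDR;
        }
        (*gop_counter)++;
        return FRAME_P;
    }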
(cherry picked from commit 37fab0661a)
While outwardly bizarre, this change makes the behaviour consistent
with other VAAPI encoders which sync to the encode /input/ picture in
order to wait for /output/ from the encoder. It is not harmful on
i965 (because synchronisation already happens in vaRenderPicture(),
so it has no effect there), and it allows the encoder to work on
mesa/gallium which assumes this behaviour.
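Sketched below with placeholder names: synchronise on the encode input surface, then map the coded buffer to read the output.

    #include <va/va.h>

    static VAStatus get_coded_data(VADisplay dpy, VASurfaceID input_surface,
                                   VABufferID coded_buf,
                                   VACodedBufferSegment **segments)
    {
        VAStatus vas = vaSyncSurface(dpy, input_surface);  // wait on *input*
        if (vas != VA_STATUS_SUCCESS)
            return vas;
        return vaMapBuffer(dpy, coded_buf, (void **)segments);  // read output
    }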
(cherry picked from commit 086e4b58b5)
This allows better checking of capabilities and will make it easier
to add more functionality later.
It also commonises some duplicated code around rate control setup
and adds more comments explaining the internals.
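One example of the kind of capability check this enables (a sketch, with error handling trimmed): query the rate-control modes the driver supports for a given profile and entrypoint before configuring anything.

    #include <va/va.h>

    static unsigned int query_rc_modes(VADisplay dpy, VAProfile profile,
                                       VAEntrypoint entrypoint)
    {
        VAConfigAttrib attr = { .type = VAConfigAttribRateControl };
        if (vaGetConfigAttributes(dpy, profile, entrypoint,
                                  &attr, 1) != VA_STATUS_SUCCESS ||
            attr.value == VA_ATTRIB_NOT_SUPPORTED)
            return 0;
        return attr.value;   // bitmask of VA_RC_* flags
    }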
(cherry picked from commit 80a5d05108)
No longer leaks memory when used with a driver with the "render does
not destroy param buffers" quirk (i.e. Intel i965).
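The quirk flag below is the one from libavutil; the surrounding code is only a sketch of the pattern: destroy the parameter buffers ourselves when the driver does not do it in vaRenderPicture().

    #include <libavutil/hwcontext_vaapi.h>
    #include <va/va.h>

    static void free_param_buffers(AVVAAPIDeviceContext *hwctx,
                                   VABufferID *ids, int nb_ids)
    {
        if (hwctx->driver_quirks & AV_VAAPI_DRIVER_QUIRK_RENDER_PARAM_BUFFERS) {
            // i965: vaRenderPicture() does not destroy these, so we must.
            for (int i = 0; i < nb_ids; i++)
                vaDestroyBuffer(hwctx->display, ids[i]);
        }
    }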
(cherry picked from commit 221ffca631)
Fixes ticket #5871.
Previously we would allocate a new output buffer for every frame. This
instead maintains an AVBufferPool of them to use as needed.
Also makes the maximum size of an output buffer adapt to the frame
size - the fixed upper bound was a bit too easy to hit when encoding
large pictures at high quality.
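A sketch of the pooling scheme; the sizing formula below is an assumption used for illustration, not the exact one in the encoder.

    #include <libavutil/buffer.h>

    static AVBufferPool *alloc_output_pool(int width, int height)
    {
        // Scale the coded-buffer size with the frame, plus headroom for
        // headers, instead of using a fixed upper bound.
        size_t size = (size_t)width * height * 3 / 2 + (1 << 16);
        return av_buffer_pool_init(size, NULL);
    }

    // Per frame: AVBufferRef *buf = av_buffer_pool_get(pool);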
This prevents attempts to use unsupported modes, such as low-power
H.264 mode on non-Skylake targets. Also fixes a crash on invalid
configuration, when trying to destroy an invalid VA config/context.
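A sketch of the defensive teardown (structure and argument names are illustrative): only destroy the VA config/context if they were actually created.

    #include <va/va.h>

    static void destroy_va_ids(VADisplay dpy,
                               VAConfigID *va_config, VAContextID *va_context)
    {
        if (*va_context != VA_INVALID_ID) {
            vaDestroyContext(dpy, *va_context);
            *va_context = VA_INVALID_ID;
        }
        if (*va_config != VA_INVALID_ID) {
            vaDestroyConfig(dpy, *va_config);
            *va_config = VA_INVALID_ID;
        }
    }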