The new videotoolbox decoder, at least, does not set an output frame if
end_frame fails. This causes the API to return success and to signal that
a picture was decoded, even though AVFrame->data[0] is NULL.
Fix this by propagating end_frame errors.
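For illustration only (not the exact patch), the fix amounts to checking the
return value of the hwaccel's end_frame callback and returning it instead of
carrying on as if a picture had been produced; the helper name below is made
up:

    #include "libavcodec/avcodec.h"

    static int finish_hwaccel_frame(AVCodecContext *avctx)
    {
        int ret = 0;

        if (avctx->hwaccel && avctx->hwaccel->end_frame) {
            ret = avctx->hwaccel->end_frame(avctx);
            if (ret < 0)
                av_log(avctx, AV_LOG_ERROR,
                       "hardware accelerator failed to decode picture\n");
            /* returning the error means the caller does not report a decoded
             * picture, so no AVFrame with data[0] == NULL reaches the user */
        }

        return ret;
    }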
Also, make hls_nal_unit() work only on the provided NAL unit, without
requiring a whole decoding context.
This will allow splitting this code for reuse by the parser.
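As a rough sketch of the shape this takes (struct and field names here are
assumptions, not necessarily what the tree uses), the function ends up needing
only the NAL unit and a logging context:

    static int hls_nal_unit(HEVCNAL *nal, AVCodecContext *avctx)
    {
        GetBitContext *gb = &nal->gb;

        if (get_bits1(gb) != 0)                /* forbidden_zero_bit */
            return AVERROR_INVALIDDATA;

        nal->type         = get_bits(gb, 6);
        nal->nuh_layer_id = get_bits(gb, 6);
        nal->temporal_id  = get_bits(gb, 3) - 1;
        if (nal->temporal_id < 0)
            return AVERROR_INVALIDDATA;

        av_log(avctx, AV_LOG_DEBUG,
               "nal_unit_type: %d, nuh_layer_id: %d, temporal_id: %d\n",
               nal->type, nal->nuh_layer_id, nal->temporal_id);

        return 0;
    }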
It is used as a get_bits() argument, and reading 0 bits doesn't make sense.
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
It is used as a get_bits() argument, and reading 0 bits isn't supported.
Reviewed-by: Michael Niedermayer <michaelni@gmx.at>
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
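A minimal sketch of the kind of check both patches are about (the name of the
length field is purely illustrative):

    unsigned len = get_ue_golomb_long(gb) + 1; /* bit length signalled in the stream */
    if (len < 1 || len > 32) {
        av_log(avctx, AV_LOG_ERROR, "Invalid bit length %u.\n", len);
        return AVERROR_INVALIDDATA;            /* never pass 0 (or >32) to get_bits */
    }
    value = get_bits_long(gb, len);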
This change introduces basic support for HEVC decoding through vdpau.
Right now, there are problems with the nvidia driver/library implementation:
frames are incorrectly laid out in memory when they are returned from the
decoder, and it is normally impossible to recover the complete decoded frame
because data is lost to alignment inconsistencies.
I hope that nvidia will fix this in due course - I've verified that the
problems also occur with their example application.
As such, this support is not useful for any real world application, but I
believe that it is correct (with the caveat that the mangled frames may hide
problems) and will work properly once the nvidia problem is fixed.
Right now, it appears that any file encoded by x265 or nvenc is decoded
correctly, but that's only because those files don't exercise many HEVC
features.
Quick summary:
Features that seem to work:
1) Short Term References
2) Scaling Lists
3) Tiling
Features with known problems:
1) Long Term References
It's hard to tell what's going on here. The nvidia example app does not set
the IsLongTerm flag on LTRs; after changing my code to match, a number of
frames using LTRs started to display correctly, but there are still samples
with LTR-related glitches.
In terms of real world files, both x265 and nvenc only use short term
refs from this list. The divx encoder seems similar.
Signed-off-by: Philip Langdale <philipl@overt.org>
Today, we track the short term RPS size for DXVA, but only if the
SliceHeader RPS is being used. Otherwise it's left uninitialized.
NVIDIA's VDPAU implementation requires that the size be accurately
tracked even if an SPS RPS is being used. In this case, it's really
counting the size of the RPS idx information, but you end up with
mangled output if the value is not accurate.
VDPAU also needs the size of the long term RPS.
Signed-off-by: Philip Langdale <philipl@overt.org>
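A sketch of the bookkeeping with illustrative field and helper names (the
placeholder calls stand in for the real RPS parsing): the sizes are measured
as get_bits_left() deltas, and the short-term size is recorded even when the
RPS comes from the SPS, in which case it covers only the RPS index bits.

    int pos = get_bits_left(gb);

    if (!sh->short_term_ref_pic_set_sps_flag) {
        /* RPS coded explicitly in the slice header */
        ret = decode_slice_header_rps(gb, sh);              /* placeholder call */
        if (ret < 0)
            return ret;
    } else if (sps->nb_st_rps > 1) {
        /* only an index into the SPS RPS list is coded */
        skip_bits(gb, av_ceil_log2(sps->nb_st_rps));
    }
    sh->short_term_ref_pic_set_size = pos - get_bits_left(gb);

    pos = get_bits_left(gb);
    ret = decode_long_term_rps(gb, sh);                     /* placeholder call */
    if (ret < 0)
        return ret;
    sh->long_term_ref_pic_set_size = pos - get_bits_left(gb);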
Personally, I need the decoder to back out if get_format() returns no
usable pixel format. This didn't work because the error code was not
propagated down the call chain. This in turn happened because the
variable declaration removed in this patch shadowed the variable, whose
value is returned at the end of the function. Consequently, failures of
decode_nal_unit() were ignored in this place.
Reviewed-by: Andreas Cadhalpun <andreas.cadhalpun@googlemail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
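The bug is the classic shadowing pattern; roughly (not the exact hevc code):

    static int decode_nal_units(HEVCContext *s, int nb_nals)
    {
        int i, ret = 0;

        for (i = 0; i < nb_nals; i++) {
            int ret = decode_nal_unit(s, i); /* BUG: new ret shadows the outer one */
            if (ret < 0)
                av_log(s->avctx, AV_LOG_WARNING,
                       "Error parsing NAL unit %d.\n", i);
            /* the error value dies here; the outer ret is still 0 */
        }

        return ret; /* always 0, so decode_nal_unit() failures are ignored */
    }

Dropping the inner declaration makes the assignment land on the outer ret, so
the final return actually propagates the error.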
Additionally, always set the software pixel format so that it is available
even if ff_get_format() is not called later. This will be useful for
exporting stream parameters from init().
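A short sketch of the intent, with assumed field and variable names:

    /* names assumed for illustration */
    s->sw_pix_fmt = sps->pix_fmt;                 /* always recorded */

    if (need_format_negotiation) {
        ret = ff_get_format(s->avctx, pix_fmts);  /* may pick a hwaccel format */
        if (ret < 0)
            return ret;
        s->avctx->pix_fmt = ret;
    }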
hevc seems to be the only place where the C implementation
of the av_clip function is explicitly selected, precluding
platform-specific optimizations.
Signed-off-by: Peter Meerwald <pmeerw@pmeerw.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
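Concretely, the change is of this shape (the surrounding expression is
illustrative, not taken from the patch):

    /* before: the generic C implementation is forced */
    dst[x] = av_clip_c(v, 0, (1 << bit_depth) - 1);

    /* after: av_clip is used, so a platform-specific definition can apply */
    dst[x] = av_clip(v, 0, (1 << bit_depth) - 1);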
The buffer pointers would otherwise be overwritten, causing a
leak on e.g. PERSIST_RPARAM_A_RExt_Sony_1.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
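A sketch of the guard, with purely illustrative names (the real patch may be
structured differently):

    if (!ctx->pixel_buffer) {                /* do not overwrite a live pointer */
        ctx->pixel_buffer = av_malloc(buf_size);
        if (!ctx->pixel_buffer)
            return AVERROR(ENOMEM);
    }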
Use edge emu buffers
And enable the code unconditionally
Speed difference without USE_SAO_SMALL_BUFFER and with the new code:
Decicycles: 26772->26220 (BO32), 83803->80942 (BO64)
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
cherry picked from commit 5d9f79edef2c11b915bdac3a025b59a32082f409
The SAO edge filter uses pre-SAO pixel data on the left and top of the CTB, so
this data must be kept available. This was done previously by keeping 2
copies of the frame, one before and one after SAO.
This commit reduces the storage to just the left and top pre-SAO data that is
actually needed, instead of a copy of the whole frame.
Commit message taken from patch by Christophe Gisquet <christophe.gisquet@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
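A sketch of the idea with made-up buffer names: before SAO rewrites a CTB, its
pre-SAO bottom row and right column are copied into small line/column buffers,
which then serve as the "top" and "left" neighbours for the CTBs below and to
the right, so no second frame copy is needed.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static void save_pre_sao_borders(const uint8_t *ctb, ptrdiff_t stride,
                                     int ctb_w, int ctb_h,
                                     uint8_t *hline_buf, uint8_t *vcol_buf)
    {
        int y;

        /* bottom row: becomes the "top" pre-SAO data for the CTB row below */
        memcpy(hline_buf, ctb + (ctb_h - 1) * stride, ctb_w);

        /* right column: becomes the "left" pre-SAO data for the next CTB */
        for (y = 0; y < ctb_h; y++)
            vcol_buf[y] = ctb[y * stride + ctb_w - 1];
    }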