The pseudo palette allocation is optional now. But if it's still
allocated (like the internal get_buffer2 implementation does, for
compatibility), it shouldn't print a warning.
PSEUDOPAL pixel formats are not paletted, but carried a palette with the
intention of allowing code to treat unpaletted formats as paletted. The
palette simply mapped the byte values to the resulting RGB values,
making it some sort of LUT for RGB conversion.
It was used for 1-byte formats only: RGB4_BYTE, BGR4_BYTE, RGB8, BGR8,
GRAY8. The first 4 are awfully obscure, used only by some ancient bitmap
formats. The last one, GRAY8, is more common, but its treatment is
grossly incorrect. It considers full-range GRAY8 only, so GRAY8 coming
from typical Y video planes is not mapped to the correct RGB values.
This cannot be fixed, because AVFrame.color_range can be freely changed
at runtime, and there is nothing to ensure the pseudo palette is
updated.
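For illustration only, this is roughly the expansion a correct
limited-range mapping would have needed (a sketch, not part of the
patch; the existing LUT simply assumed R = G = B = v):

    #include <stdint.h>

    /* Hypothetical helper: expand video-range luma (16..235) to full
     * range (0..255) before using it as an RGB value. */
    static uint8_t gray_limited_to_full(uint8_t v)
    {
        int x = (v - 16) * 255 / 219;
        return x < 0 ? 0 : x > 255 ? 255 : (uint8_t)x;
    }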
Also, nothing actually used the PSEUDOPAL palette data, except xwdenc
(trivially changed in the previous commit). All other code had to treat
it as a special case, just to ignore or to propagate palette data.
In conclusion, this was just a very strange old mechanism that has no
real justification to exist anymore (although it may have been nice and
useful in the past). Now it's an artifact that makes the API harder to
use: API users who allocate their own pixel data have to be aware that
they need to allocate the palette, or FFmpeg will crash on them in
_some_ situations. On top of this, there was no API to allocate the
pseudo palette outside of av_frame_get_buffer().
This patch not only deprecates AV_PIX_FMT_FLAG_PSEUDOPAL, but also makes
the pseudo palette optional. Nothing accesses it anymore, though if it's
set, it's propagated. It's still allocated and initialized for
compatibility with API users that rely on this feature. But new API
users do not need to allocate it. This was an explicit goal of this
patch.
Most changes replace AV_PIX_FMT_FLAG_PSEUDOPAL with FF_PSEUDOPAL. I
first tried #ifdefing all code, but it was a mess. The FF_PSEUDOPAL
macro reduces the mess, and still allows defining FF_API_PSEUDOPAL to 0.
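Roughly, the helper boils down to something like this (sketch; the
actual definition lives in an internal header):

    /* Compile-time switch: with FF_API_PSEUDOPAL disabled, all
     * (flags & FF_PSEUDOPAL) checks become a constant 0 and the
     * surrounding code folds away. */
    #if FF_API_PSEUDOPAL
    #define FF_PSEUDOPAL AV_PIX_FMT_FLAG_PSEUDOPAL
    #else
    #define FF_PSEUDOPAL 0
    #endif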
Passes FATE with FF_API_PSEUDOPAL enabled and disabled. In addition,
FATE passes with FF_API_PSEUDOPAL set to 1, but with allocation
functions manually changed to not allocate a palette.
This is for applications which want to explicitly check for invalid
UTF-8 manually, and take actions that are better than dropping invalid
subtitles silently. (It's pretty much silent because sporadic avcodec
error messages are so common that you can't reasonably display them in a
prominent and meaningful way in an application GUI.)
And remove the function altogether while at it. It's a duplicate of
another.
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This number is definitely required when frame threading is enabled, so
add it here rather than forcing all users to handle it themselves.
DXVA2 contained this addition in its own code as well (so the number was
added twice in the internal case) - just remove it from there.
AVCodecContext.extra_hw_frames is added to the size of hardware frame
pools created by libavcodec for APIs which require fixed-size pools.
This allows the user to keep references to a greater number of frames
after decode, which may be necessary for some use-cases.
It is also added to the initial_pool_size value returned by
avcodec_get_hw_frames_parameters() if a fixed-size pool is required.
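Caller-side usage sketch (function name and the value 8 are arbitrary
examples; error handling trimmed):

    #include <libavcodec/avcodec.h>

    static int open_hw_decoder(AVCodecContext *avctx, const AVCodec *codec)
    {
        /* Keep up to 8 extra decoded surfaces referenced after decode;
         * this is added on top of the fixed pool size libavcodec needs. */
        avctx->extra_hw_frames = 8;
        return avcodec_open2(avctx, codec, NULL);
    }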
This removes the dependency that hardware pixel formats previously had on
AVHWAccel instances, meaning only those which actually do something need
exist after this patch.
Also updates avcodec_default_get_format() to be able to choose hardware
formats if either a matching device has been supplied or no additional
external configuration is required, and avcodec_get_hw_frames_parameters()
to use the hardware config rather than searching the old hwaccel list.
In commit 061a0c14bb ("decode: restructure the core decoding code"), the
deprecated avcodec_decode_* APIs were reworked so that they called into the
new avcodec_send_packet / avcodec_receive_frame API. This had the side effect
of prohibiting sending new packets containing data after a drain
packet, but in previous versions of FFmpeg this "worked" and some
applications relied on it.
To restore some compatibility, reset the codec if we receive a new non-drain
packet using the old API after draining has completed. While this does
not give the same behaviour as the old API did, in the majority of cases
it works and it does not require changes to any other part of the decoding
code.
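For comparison, with the new API an application has to do the reset
itself; a minimal sketch (names illustrative, error paths trimmed):

    #include <libavcodec/avcodec.h>

    /* Drain the decoder, then reset it so a new packet can be decoded. */
    static int restart_after_drain(AVCodecContext *avctx, AVPacket *pkt)
    {
        AVFrame *frame = av_frame_alloc();
        if (!frame)
            return AVERROR(ENOMEM);
        avcodec_send_packet(avctx, NULL);          /* enter draining  */
        while (avcodec_receive_frame(avctx, frame) >= 0)
            av_frame_unref(frame);                 /* consume output  */
        av_frame_free(&frame);
        avcodec_flush_buffers(avctx);              /* reset the codec */
        return avcodec_send_packet(avctx, pkt);    /* resume decoding */
    }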
Fixes ticket #6775
Signed-off-by: James Cowgill <jcowgill@debian.org>
Signed-off-by: Marton Balint <cus@passwd.hu>
This removes the dependency that hardware pixel formats previously had on
AVHWAccel instances, meaning only those which actually do something need
exist after this patch.
Also updates avcodec_default_get_format() to be able to choose hardware
formats if either a matching device has been supplied or no additional
external configuration is required, and avcodec_get_hw_frames_parameters()
to use the hardware config rather than searching the old hwaccel list.
The FF_CODEC_CAP_HWACCEL_REQUIRE_CLASS mechanism is deleted because it
no longer does anything (the codec already contains the pointers to the
matching hwaccels).
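For reference, a minimal caller-side get_format callback that picks a
hardware format when one is offered (VAAPI chosen arbitrarily here) and
otherwise falls back to the default selection:

    #include <libavcodec/avcodec.h>

    static enum AVPixelFormat pick_hw_format(AVCodecContext *avctx,
                                             const enum AVPixelFormat *fmts)
    {
        const enum AVPixelFormat *p;
        for (p = fmts; *p != AV_PIX_FMT_NONE; p++) {
            if (*p == AV_PIX_FMT_VAAPI)
                return *p;
        }
        /* Let the (now hwaccel-aware) default selection handle the rest. */
        return avcodec_default_get_format(avctx, fmts);
    }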
There is no reason to keep this intact when decoding failed, especially
as private_ref is supposed to always be NULL when a frame is returned to
the user.
Currently, AVHWAccels are looked up using a (codec_id, pixfmt) tuple.
This means it's impossible to have 2 decoders for the same codec that
both use the same opaque hardware pixel format.
This breaks merging Libav's CUVID hwaccel. FFmpeg has its own CUVID
support, but it's a full stream decoder, using NVIDIA's codec parser.
The Libav one is a true hwaccel, which is based on the builtin software
decoders.
Fix this by introducing another field to disambiguate AVHWAccels, and
use it for our CUVID decoders. FF_CODEC_CAP_HWACCEL_REQUIRE_CLASS makes
this mechanism backwards compatible and optional.
This will be useful in the CUVID hwaccel. It should also eventually
replace current decoder-specific mechanisms used by various other
hwaccels.
Merges Libav commit 704311b294.
If the get_buffer() call fails, the frame might have some side data
already set. Make sure it gets freed.
Merges Libav commit de77671438.
Signed-off-by: James Almer <jamrial@gmail.com>
Commit b46a77f19d accidentally broke this (a requested change that was
added to the patch later and was not fully tested).
Signed-off-by: Mark Thompson <sw@jkqxz.net>
This adds a new API, which allows the API user to query the required
AVHWFramesContext parameters. This also reduces code duplication across
the hwaccels by introducing ff_decode_get_hw_frames_ctx(), which uses
the new API function. It takes care of initializing the hw_frames_ctx
if needed, and does additional error handling and API usage checking.
Support for VDA and Cuvid is missing.
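A sketch of how a caller might use the new query function from its
get_format callback (device_ref is assumed to be an already created
hardware device reference; function name illustrative, error handling
abbreviated):

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    static int setup_hw_frames(AVCodecContext *avctx, AVBufferRef *device_ref,
                               enum AVPixelFormat hw_fmt)
    {
        AVBufferRef *frames_ref = NULL;
        int ret = avcodec_get_hw_frames_parameters(avctx, device_ref,
                                                   hw_fmt, &frames_ref);
        if (ret < 0)
            return ret;
        ret = av_hwframe_ctx_init(frames_ref);
        if (ret < 0) {
            av_buffer_unref(&frames_ref);
            return ret;
        }
        avctx->hw_frames_ctx = frames_ref;
        return 0;
    }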
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This flag replaces the deprecated, non-prefixed HWACCEL_CODEC_CAP_EXPERIMENTAL
one.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
Use the AVFrame.opaque_ref field. The original user's opaque_ref is
wrapped in the lavc struct and then unwrapped before the frame is
returned to the caller.
This new struct will be useful in the following commits.
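The wrap/unwrap idea, as an illustrative sketch only (the struct and
helper names here are made up, not the actual lavc ones):

    #include <libavutil/buffer.h>
    #include <libavutil/error.h>
    #include <libavutil/frame.h>
    #include <libavutil/mem.h>

    typedef struct PrivOpaque {
        AVBufferRef *user_ref;   /* the caller's original opaque_ref */
    } PrivOpaque;

    static void priv_opaque_free(void *opaque, uint8_t *data)
    {
        PrivOpaque *po = (PrivOpaque *)data;
        av_buffer_unref(&po->user_ref);
        av_free(po);
    }

    static int wrap_opaque(AVFrame *frame)
    {
        PrivOpaque *po = av_mallocz(sizeof(*po));
        if (!po)
            return AVERROR(ENOMEM);
        po->user_ref      = frame->opaque_ref;        /* take ownership   */
        frame->opaque_ref = av_buffer_create((uint8_t *)po, sizeof(*po),
                                             priv_opaque_free, NULL, 0);
        if (!frame->opaque_ref) {
            frame->opaque_ref = po->user_ref;         /* restore on error */
            av_free(po);
            return AVERROR(ENOMEM);
        }
        return 0;
    }

Before the frame is handed back to the caller, the original reference is
moved back into frame->opaque_ref and the wrapper is freed.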
Hardware accelerated decoding generally uses AVHWFramesContext for pool
allocation of hardware surfaces. These are set up to allocate surfaces
aligned to hardware and hwaccel API requirements. Due to the
architecture, av_hwframe_get_buffer() will return AVFrames with
the dimensions set to the aligned sizes.
This causes some decoders (like hevc) to return these aligned sizes as
the final frame size, instead of cropping them to the video's actual
dimensions. To make sure this doesn't happen, crop the frame to the
size the decoder expects when ff_get_buffer() is called.
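A rough sketch of the idea (not the actual lavc code; function name
illustrative):

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    static int get_hw_buffer_cropped(AVCodecContext *avctx, AVFrame *frame)
    {
        int ret = av_hwframe_get_buffer(avctx->hw_frames_ctx, frame, 0);
        if (ret < 0)
            return ret;
        /* The pool hands back a surface with aligned dimensions; report
         * the size the decoder actually asked for instead. */
        frame->width  = avctx->width;
        frame->height = avctx->height;
        return 0;
    }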
Merges Libav commit 3fdf50f9e8.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
So a hwaccel can access avctx->hwaccel in init for whatever reason. This
is for the new d3d hwaccel API. We could create separate entrypoints for
each of the 3 hwaccel types (dxva2, d3d11va, new d3d11va), but this
seems nicer.
Merges Libav commit bd747b9226.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
Hardware accelerated decoding generally uses AVHWFramesContext for pool
allocation of hardware surfaces. These are set up to allocate surfaces
aligned to hardware and hwaccel API requirements. Due to the
architecture, av_hwframe_get_buffer() will return AVFrames with
the dimensions set to the aligned sizes.
This causes some decoders (like hevc) to return these aligned sizes as
the final frame size, instead of cropping them to the video's actual
dimensions. To make sure this doesn't happen, crop the frame to the
size the decoder expects when ff_get_buffer() is called.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Use avci->last_pkt_props to get the side data. Using |pkt| doesn't work
when FF_API_MERGE_SD is set, as the compressed side data is expanded into
|tmp|, leaving the original |pkt| unchanged.
Signed-off-by: James Almer <jamrial@gmail.com>
So a hwaccel can access avctx->hwaccel in init for whatever reason. This
is for the new d3d hwaccel API. We could create separate entrypoints for
each of the 3 hwaccel types (dxva2, d3d11va, new d3d11va), but this
seems nicer.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
This way, all frames and errors are correctly reported in order.
Also limit the number of errors during draining to prevent an infinite loop.
This fixes a FATE failure with THREADS>=4:
make fate-h264-attachment-631 THREADS=4
This also reverts a755b725ec.
Suggested-by: wm4, Ronald S. Bultje, Marton Balint
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
Currently, the new decoding API is pretty much just a wrapper around the
old deprecated one. This is problematic, since it interferes with making
full use of the flexibility added by the new API. The old API should
also be removed at some future point.
Reorganize the code so that the new send_packet/receive_frame functions
call the actual decoding directly and change the old deprecated
avcodec_decode_* functions into wrappers around the new API.
The new internal API for decoders is now changing as well. Before this
commit, it mirrors the public API, so the decoders need to implement
send_packet() and receive_frame() callbacks. This turns out to require
awkward constructs in both the decoders and the generic code. After this
commit, the decoders only implement the receive_frame() callback and
call a new internal function, ff_decode_get_packet(), to obtain input
data, in the same manner as the bitstream filters now work.
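A decoder under the new internal API then looks roughly like this
(sketch using internal headers; decode_one_frame() is a hypothetical
per-codec helper):

    #include "avcodec.h"
    #include "decode.h"

    static int mydec_receive_frame(AVCodecContext *avctx, AVFrame *frame)
    {
        AVPacket pkt = { 0 };
        int ret = ff_decode_get_packet(avctx, &pkt);  /* pull input */
        if (ret < 0)
            return ret;          /* AVERROR(EAGAIN) or AVERROR_EOF */
        ret = decode_one_frame(avctx, frame, &pkt);   /* hypothetical */
        av_packet_unref(&pkt);
        return ret;
    }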
avcodec will now always make a reference to the input packet, which means
that non-refcounted input packets will be copied. Keeping the previous
behaviour, where this copy could sometimes be avoided, would make the
code significantly more complex and fragile for only dubious gains,
since packets are typically small and everyone who cares about
performance should use refcounted packets anyway.
The current code stores a pointer to the packet passed to the decoder,
which is then used during get_buffer() for timestamps and side data
passthrough. However, since this is a pointer to user data which we do
not own, storing it is potentially dangerous. It is also ill defined for
the new decoding API with split input/output.
Fix this problem by making an explicit internally owned copy of the
packet properties.
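A minimal sketch of the idea (names illustrative):

    #include <libavcodec/avcodec.h>

    /* Copy timestamps and side data into an internally owned packet
     * instead of holding on to the caller's pointer. */
    static int store_pkt_props(AVPacket *props, const AVPacket *src)
    {
        av_packet_unref(props);
        return av_packet_copy_props(props, src);
    }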