Move libavcodec-managed objects from the public struct vaapi_context
to a new, privately owned FFVAContext. This is done to clean up and
streamline the public structure, but also to prepare for new codec
support, which will require new internal data to be added there.
AVCodecContext.hwaccel_context, which holds the public vaapi_context,
shall no longer be accessed from within the vaapi_*.c codec support files.
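For illustration, a minimal sketch of the intended split, assuming a private
struct plus an accessor; the field names and the accessor below are
assumptions, not the exact contents of the real FFVAContext:

    #include <va/va.h>
    #include "libavcodec/avcodec.h"

    /* Illustrative only: fields are assumptions. libavcodec-managed state
     * moves here, out of the public struct vaapi_context. */
    typedef struct FFVAContext {
        VAContextID context_id;        /* decode context owned by libavcodec */
        VABufferID  pic_param_buf_id;  /* per-frame parameter buffer */
    } FFVAContext;

    /* Hypothetical accessor used by the vaapi_*.c files instead of reading
     * AVCodecContext.hwaccel_context directly. */
    FFVAContext *ff_vaapi_get_context(AVCodecContext *avctx);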
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Currently, when forcing an I frame via the API, or via the ffmpeg CLI
using -force_key_frames, we still let x264 decide what sort of
keyframe to use. In some cases it is useful to be able to force
an IDR frame, e.g. for cutting streams.
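A hedged API-level sketch of what this enables; the private option name
"forced-idr" is an assumption here and would normally be set before opening
the encoder:

    #include "libavcodec/avcodec.h"
    #include "libavutil/opt.h"

    /* Force the next frame to be a keyframe through the API and, assuming the
     * libx264 wrapper exposes a "forced-idr" private option, ask for an IDR. */
    static int force_idr_keyframe(AVCodecContext *avctx, AVFrame *frame)
    {
        int ret = av_opt_set(avctx->priv_data, "forced-idr", "1", 0);
        if (ret < 0)
            return ret;                        /* option name is an assumption */

        frame->pict_type = AV_PICTURE_TYPE_I;  /* request a keyframe for this frame */
        return 0;
    }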
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
This option is extremely codec-specific and only a few codecs employ it.
Move it to codec private options instead: the mpegenc family supports only
3 values, xavs and x264 use 5, and xvid has a different metric entirely.
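For reference, a minimal sketch of how a codec-private option is declared
through the AVOptions API; the option, field, and struct names below are
hypothetical, not the ones introduced by this change:

    #include <stddef.h>
    #include "libavutil/opt.h"

    typedef struct ExampleEncContext {
        const AVClass *class;   /* required first member for AVOptions */
        int motion_est;         /* hypothetical private field replacing the global option */
    } ExampleEncContext;

    #define OFFSET(x) offsetof(ExampleEncContext, x)
    #define VE (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)

    static const AVOption example_options[] = {
        { "motion-est", "motion estimation method", OFFSET(motion_est),
          AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 2, VE },
        { NULL },
    };

    static const AVClass example_class = {
        .class_name = "example encoder",
        .item_name  = av_default_item_name,
        .option     = example_options,
        .version    = LIBAVUTIL_VERSION_INT,
    };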
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
The stats are a superset of the quality factor, also allowing the picture
type and encoder "PSNR" stats to be exported.
This also replaces native byte order with fixed little-endian order for the
affected side data.
AV_PKT_DATA_QUALITY_FACTOR is left as a synonym of AV_PKT_DATA_QUALITY_STATS.
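A hedged consumer-side sketch; the layout assumed below (a little-endian
32-bit quality value followed by a picture-type byte) mirrors the description
above and should be checked against the side-data documentation:

    #include <stdio.h>
    #include "libavcodec/avcodec.h"
    #include "libavutil/intreadwrite.h"

    /* Print the encoder quality stats attached to an encoded packet. */
    static void print_quality_stats(AVPacket *pkt)
    {
        int size;
        uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_QUALITY_STATS, &size);
        if (!sd || size < 5)
            return;

        /* assumed layout: u32le quality factor, then one picture-type byte */
        printf("quality %u, pict_type %c\n",
               AV_RL32(sd), av_get_picture_type_char(sd[4]));
    }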
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The rationale is that coded_frame was only used to communicate key_frame,
pict_type and quality to the caller, as well as a few other random fields,
in an unpredictable, let alone consistent, way.
There was agreement that there was no use case for coded_frame, as it is
a full-sized AVFrame container used for just 2-3 int-sized properties,
which shouldn't even belong in the AVCodecContext in the first place.
The appropriate AVPacket flag can be used instead of key_frame, while
quality is exported with the new AVPacketSideData quality factor.
There is no replacement for the other fields, as they were unreliable,
mishandled, or simply not used at all.
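A minimal caller-side sketch of the key_frame replacement (the quality value
can be read from the packet side data as shown earlier):

    #include "libavcodec/avcodec.h"

    /* old: avctx->coded_frame->key_frame
     * new: check the packet flag instead */
    static int packet_is_keyframe(const AVPacket *pkt)
    {
        return !!(pkt->flags & AV_PKT_FLAG_KEY);
    }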
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This is necessary to preserve the quality information currently exported
with coded_frame. Add the new side data to every encoder that needs it,
and use it in avconv.
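Encoder-side, attaching the quality value might look roughly like this sketch
(same layout assumption as above; real encoders may go through an internal
helper instead):

    #include <string.h>
    #include "libavcodec/avcodec.h"
    #include "libavutil/intreadwrite.h"

    /* Attach the per-frame quality to the output packet as side data. */
    static int export_quality(AVPacket *pkt, int quality, int pict_type)
    {
        uint8_t *sd = av_packet_new_side_data(pkt, AV_PKT_DATA_QUALITY_STATS, 8);
        if (!sd)
            return AVERROR(ENOMEM);
        memset(sd, 0, 8);
        AV_WL32(sd, quality);   /* assumed: u32le quality factor first */
        sd[4] = pict_type;      /* assumed: picture type byte follows */
        return 0;
    }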
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
They are used by dnxhd and mpegvideo_enc exclusively; move them to codec
private options instead.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
ELS and ePIC decoder courtesy of Maxim Poliakovski,
cleanup and integration by Diego Biurrun.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
This HWAccel isn't really usable right now due to an nvidia driver bug,
so we don't want it selected by default.
HWAccels have a capabilities field and there's a comment about flags,
but no flags exist today, so let's add one for experimental hwaccels.
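A rough sketch of the idea, assuming the new flag is a bit in the AVHWAccel
capabilities field and that experimental hwaccels are only selected when the
user opts in via strict_std_compliance; the flag name and value below are
assumptions:

    #include "libavcodec/avcodec.h"

    /* hypothetical flag value; the real name and value may differ */
    #define HWACCEL_CAP_EXPERIMENTAL (1 << 9)

    static int hwaccel_is_usable(const AVHWAccel *hwaccel, const AVCodecContext *avctx)
    {
        if ((hwaccel->capabilities & HWACCEL_CAP_EXPERIMENTAL) &&
            avctx->strict_std_compliance > FF_COMPLIANCE_EXPERIMENTAL)
            return 0;   /* skip experimental hwaccels unless explicitly allowed */
        return 1;
    }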
Ideally this should be discarded by the demuxer, but that is not
possible without fully parsing the stream, which would end up very
similar to this. The current ID3v1 discard code in the demuxer does
not work and will be removed in a subsequent commit.
If needed, the discard code could be adjusted to also discard tags at
locations other than the end, to limit the discarding to input coming
from the mp3 demuxer, or even to move the discarding into the
decoder.
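For reference, an ID3v1 tag is a fixed 128-byte block appended at the end of
the file and starting with the ASCII marker "TAG"; detecting one is as simple
as this sketch:

    #include <stdint.h>
    #include <string.h>

    #define ID3V1_TAG_SIZE 128

    /* Return 1 if the last 128 bytes of the buffer look like an ID3v1 tag. */
    static int ends_with_id3v1(const uint8_t *buf, size_t size)
    {
        if (size < ID3V1_TAG_SIZE)
            return 0;
        return !memcmp(buf + size - ID3V1_TAG_SIZE, "TAG", 3);
    }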
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Bump the minimum libvpx version to 1.3.0 and rework the configure logic
to fail only if neither decoders nor encoders are found.
Based on the original patch from Vittorio.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The WebPAnimEncoder API is a combination of an encoder (WebPEncoder) and a
muxer (WebPMux). It performs several optimizations that make it more
efficient than the combination of WebPEncode() and the native ffmpeg muxer.
When the WebPAnimEncoder API is used:
- In the encoder layer: WebPAnimEncoderAdd() is used instead of
WebPEncode().
- In the muxer layer: the muxer works like a raw muxer.
When the WebPAnimEncoder API isn't available, the old code is used as is:
- In the encoder layer: WebPEncode() is used to encode each frame.
- In the muxer layer: the native ffmpeg muxer is used.
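A hedged sketch of the libwebp call sequence involved (error checking mostly
omitted; consult the libwebp headers for exact semantics):

    #include <webp/encode.h>
    #include <webp/mux.h>

    /* Encode a sequence of frames into one animated WebP via WebPAnimEncoder. */
    static int encode_animation(WebPPicture *frames, int nb_frames,
                                int width, int height, int frame_duration_ms,
                                WebPData *out)
    {
        WebPAnimEncoderOptions enc_options;
        WebPConfig config;
        WebPAnimEncoder *enc;
        int i;

        WebPAnimEncoderOptionsInit(&enc_options);
        WebPConfigInit(&config);

        enc = WebPAnimEncoderNew(width, height, &enc_options);
        if (!enc)
            return -1;

        for (i = 0; i < nb_frames; i++)
            WebPAnimEncoderAdd(enc, &frames[i], i * frame_duration_ms, &config);

        /* a final NULL frame flushes the encoder, then the bitstream is assembled */
        WebPAnimEncoderAdd(enc, NULL, nb_frames * frame_duration_ms, NULL);
        WebPAnimEncoderAssemble(enc, out);

        WebPAnimEncoderDelete(enc);
        return 0;
    }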
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
This allows client programs to ask for nv12 surfaces instead of the
current default (uyvy), since those are more efficient to decode to.
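A rough usage sketch, assuming the choice is exposed as a CoreVideo pixel
format field on the hwaccel context; the field and function names below are
assumptions:

    #include <CoreVideo/CVPixelBuffer.h>
    #include "libavcodec/avcodec.h"
    #include "libavcodec/vda.h"

    /* Hypothetical: request NV12 output surfaces instead of the UYVY default. */
    static int init_vda_nv12(AVCodecContext *avctx)
    {
        AVVDAContext *vda = av_vda_alloc_context();
        if (!vda)
            return AVERROR(ENOMEM);
        /* assumed field selecting the CoreVideo pixel format of decoded frames */
        vda->cv_pix_fmt_type = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange;
        return av_vda_default_init2(avctx, vda);
    }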
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>