Compared to existing, common open-source H.264 encoders, this can be
useful since it has a different license (BSD instead of GPL). In terms
of performance and quality, it is comparable to x264 in ultrafast mode.
Hooking it up as an encoder in libavcodec also simplifies comparing
it against other common encoders.
This requires OpenH264 1.3 or newer. Since the OpenH264 API and ABI
change frequently, only releases are supported.
To take advantage of the OpenH264 patent offer, the OpenH264 library
must not be redistributed; instead, it must be downloaded at runtime on
the end-user's system.
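Once hooked up, the wrapper is selected like any other encoder through
the normal libavcodec API. A minimal sketch, assuming the encoder is
registered under the name "libopenh264":

    #include <libavcodec/avcodec.h>

    /* Sketch only: open the OpenH264 wrapper like any other encoder.
     * The name "libopenh264" is an assumption about how it is
     * registered. */
    static AVCodecContext *open_openh264(int width, int height)
    {
        const AVCodec *codec;
        AVCodecContext *ctx;

        avcodec_register_all();          /* needed on older libavcodec */
        codec = avcodec_find_encoder_by_name("libopenh264");
        if (!codec)
            return NULL;                 /* not compiled in */

        ctx = avcodec_alloc_context3(codec);
        if (!ctx)
            return NULL;
        ctx->width     = width;
        ctx->height    = height;
        ctx->time_base = (AVRational){ 1, 25 };
        ctx->pix_fmt   = AV_PIX_FMT_YUV420P;

        if (avcodec_open2(ctx, codec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return NULL;
        }
        return ctx;
    }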
Signed-off-by: Martin Storsjö <martin@martin.st>
Since the VDPAU pixel format does not distinguish between different
VDPAU video surface chroma types, we need another way to pass this
data to the application.
Originally, VDPAU in libavcodec only supported decoding to 8-bit YUV
with 4:2:0 chroma sampling. Correspondingly, applications assumed that
libavcodec expected VDP_CHROMA_TYPE_420 video surfaces for output.
However, some of the new HEVC profiles proposed for addition to VDPAU
would require different depths and/or sampling:
http://lists.freedesktop.org/archives/vdpau/2014-July/000167.html
...as would lossless AVC profiles:
http://lists.freedesktop.org/archives/vdpau/2014-November/000241.html
To preserve backward binary compatibility with existing applications,
a new av_vdpau_bind_context() flag is introduced in a further change.
Signed-off-by: Rémi Denis-Courmont <remi@remlab.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This can be used by the application to signal its ability to cope with
video surfaces of types other than 8-bit YUV 4:2:0.
Signed-off-by: Rémi Denis-Courmont <remi@remlab.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This carries the pixel format that would be used if it were not for
hardware acceleration. This is equal to AVCodecContext.pix_fmt if
hardware acceleration is not in use.
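A small sketch of how an application might use this; the field name
sw_pix_fmt is an assumption here, since this message does not name the
new field:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/pixdesc.h>

    /* Sketch only: the field name sw_pix_fmt is assumed. */
    static void report_hwaccel(const AVCodecContext *avctx)
    {
        if (avctx->pix_fmt != avctx->sw_pix_fmt)
            printf("hwaccel active, software format would be %s\n",
                   av_get_pix_fmt_name(avctx->sw_pix_fmt));
    }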
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Previously, quality could only be set through qscale/global_quality, but
the scale was inverted. Using a separate option avoids the confusion of
qscale working backwards.
Reviewed-by: Benoit Fouet <benoit.fouet@free.fr>
Reviewed-by: Clément Bœsch <u@pkh.me>
Reviewed-by: Nicolas George <george@nsup.org>
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
For streams that contain DRC metadata, the FDK decoder is able to
control rendering of the decoded output. The rendering parameters
are detailed in fdk_aac_dec_options[].
The default behavior is left up to the decoder.
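As an illustration, such an option could be set through the AVOption
API roughly as below; the option name "drc_boost" is an assumption about
one of the entries in fdk_aac_dec_options[]:

    #include <libavutil/opt.h>
    #include <libavcodec/avcodec.h>

    /* Sketch only: "drc_boost" is assumed to be one of the entries in
     * fdk_aac_dec_options[]; check the table for real names and
     * value ranges. */
    static int request_drc_boost(AVCodecContext *avctx, int boost)
    {
        return av_opt_set_int(avctx->priv_data, "drc_boost", boost, 0);
    }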
Signed-off-by: Martin Storsjö <martin@martin.st>
The FDK decoder is capable of producing mono and stereo downmixes from
multichannel streams. These streams may contain metadata that controls
the downmix process. The decoder requires an ancillary buffer in order
to correctly apply the downmix in streams containing downmix metadata.
The decoder does not have an API to report the presence of such metadata
in the stream, and therefore the ancillary buffer is always allocated
whenever a downmix is requested.
When downmixing multichannel streams, the decoder requires the output
buffer passed to the aacDecoder_DecodeFrame call to be large enough to
hold the actual number of channels contained in the stream. For example,
for a 5.1 to stereo downmix, the decoder requires that the output buffer
is allocated for 6 channels, even though the output is in fact only two
channels.
Due to this requirement, the output buffer is allocated at its maximum
size when a downmix is requested (and also during decoder init). When a
downmix is requested, the buffer used for output during init will also
be used for as long as the decoder is open. Otherwise, the initial
decoder output buffer is freed and the decoder decodes straight into the
output AVFrame.
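A rough sketch of the sizing logic described above; the constants and
names are illustrative, not the actual wrapper code:

    #include <stdint.h>
    #include <libavutil/mem.h>

    /* Illustrative only: sizes and names are assumptions. */
    enum {
        MAX_STREAM_CHANNELS = 8,   /* worst case the stream may contain */
        MAX_FRAME_SAMPLES   = 2048 /* samples per channel and frame     */
    };

    static int16_t *alloc_decode_buffer(int downmix_requested)
    {
        if (!downmix_requested)
            return NULL; /* decode straight into the output AVFrame */

        /* Sized for the full channel count, even though the downmixed
         * output is only mono or stereo. */
        return av_malloc(MAX_STREAM_CHANNELS * MAX_FRAME_SAMPLES *
                         sizeof(int16_t));
    }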
Signed-off-by: Martin Storsjö <martin@martin.st>
When decoding, this field holds the inverse of the framerate that can be
written in the headers for some codecs. Using a field called 'time_base'
for this is very misleading, as there are no timestamps associated with
it. Furthermore, this field is used for a very different purpose during
encoding.
Add a new field, called 'framerate', to replace the use of time_base for
decoding.
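For example, an application could derive a nominal per-frame duration
from the new field (a small sketch, not part of this change):

    #include <libavcodec/avcodec.h>
    #include <libavutil/rational.h>

    /* Sketch: nominal duration of one frame, in seconds, based on the
     * reported framerate. Returns 0 if no framerate was reported. */
    static double nominal_frame_duration(const AVCodecContext *avctx)
    {
        if (avctx->framerate.num <= 0 || avctx->framerate.den <= 0)
            return 0.0;
        return av_q2d(av_inv_q(avctx->framerate));
    }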
Decoding acceleration may work even if the codec level is higher than
the stated limit of the VDPAU driver. Or the problem may be considered
acceptable by the user. This flag allows skipping the codec level
capability checks and proceeding with decoding.
Applications should obviously not set this flag by default, but only if
the user explicitly requested this behavior (and presumably knows how
to turn it back off if it fails).
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Currently, the amount of padding inserted at the beginning by some audio
encoders is exported through AVCodecContext.delay. However:
- the term 'delay' is heavily overloaded and can have multiple different
meanings even in the case of audio encoding.
- this field has entirely different meanings, depending on whether the
codec context is used for encoding or decoding (and has yet another
different meaning for video), preventing generic handling of the codec
context.
Therefore, add a new field -- AVCodecContext.initial_padding. It could
conceivably be used for decoding as well at a later point.
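For instance, a muxing application could convert the padding into a
stream time base roughly like this (a sketch only):

    #include <libavcodec/avcodec.h>
    #include <libavutil/mathematics.h>

    /* Sketch: express the encoder's initial padding in an arbitrary
     * output time base. */
    static int64_t padding_in_tb(const AVCodecContext *enc, AVRational tb)
    {
        return av_rescale_q(enc->initial_padding,
                            (AVRational){ 1, enc->sample_rate }, tb);
    }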
The register function now specifies that, on failure, the user callback
should leave things in the same state as it found them, but that a
failure to destroy is ignored by the library. The register function is
now explicit about its behavior on failure (it unregisters the previous
callback and destroys all mutexes).
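A pthread-based callback following these rules might look roughly like
the sketch below (not the canonical implementation):

    #include <stdlib.h>
    #include <pthread.h>
    #include <libavcodec/avcodec.h>

    /* Sketch of a lock manager callback: on failure it leaves *mtx as
     * it found it; a failing destroy is ignored by libavcodec anyway. */
    static int lockmgr_cb(void **mtx, enum AVLockOp op)
    {
        switch (op) {
        case AV_LOCK_CREATE: {
            pthread_mutex_t *m = malloc(sizeof(*m));
            if (!m || pthread_mutex_init(m, NULL)) {
                free(m);
                return 1;          /* *mtx left untouched */
            }
            *mtx = m;
            return 0;
        }
        case AV_LOCK_OBTAIN:
            return !!pthread_mutex_lock(*mtx);
        case AV_LOCK_RELEASE:
            return !!pthread_mutex_unlock(*mtx);
        case AV_LOCK_DESTROY:
            if (*mtx) {
                pthread_mutex_destroy(*mtx);
                free(*mtx);
                *mtx = NULL;
            }
            return 0;
        }
        return 1;
    }

    /* Registered with: av_lockmgr_register(lockmgr_cb); */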
Signed-off-by: Manfred Georg <mgeorg@google.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This function provides an explicit VDPAU device and VDPAU driver to
libavcodec, so that the application is relieved of codec specifics and
VdpDevice life cycle management.
A stub flags parameter is added for future extension. For instance, it
could be used to ignore codec level capabilities (if someone feels
adventurous).
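Hypothetical usage from an application that has already created its own
VdpDevice; passing 0 for the stub flags parameter requests the default
behavior:

    #include <vdpau/vdpau.h>
    #include <libavcodec/avcodec.h>
    #include <libavcodec/vdpau.h>

    /* Sketch: hand the application's VdpDevice to libavcodec. */
    static int setup_vdpau(AVCodecContext *avctx, VdpDevice device,
                           VdpGetProcAddress *get_proc_address)
    {
        return av_vdpau_bind_context(avctx, device, get_proc_address, 0);
    }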
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Add CODEC_FLAG2_SKIP_MANUAL (exposed as "skip_manual"), which makes
the decoder export sample skip information via side data, instead
of applying it automatically. The format of the side data is the
same as AV_PKT_DATA_SKIP_SAMPLES, but since AVPacket and AVFrame
side data constants overlap, AV_FRAME_DATA_SKIP_SAMPLES needs to
be introduced.
This is useful for applications which want to do the timestamp
calculations manually, or which actually want to retrieve the
padding.
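With the flag set, an application could read the exported information
from a decoded frame roughly as follows (a sketch; the layout is assumed
to match AV_PKT_DATA_SKIP_SAMPLES, i.e. two little-endian 32-bit sample
counts followed by reason bytes):

    #include <stdint.h>
    #include <libavutil/frame.h>
    #include <libavutil/intreadwrite.h>

    /* Sketch: read the skip counts exported when "skip_manual" is set. */
    static void get_skip_samples(const AVFrame *frame,
                                 uint32_t *start, uint32_t *end)
    {
        AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_SKIP_SAMPLES);

        *start = *end = 0;
        if (sd && sd->size >= 8) {
            *start = AV_RL32(sd->data);          /* skip from start */
            *end   = AV_RL32(sd->data + 4);      /* skip from end   */
        }
    }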
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The encoder produces files that are no longer compatible with previous
versions of the decoder, and may actually cause decoding issues for
other software, so indicate that change to allow decoders to apply
quirks.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
It causes build failures in some cases, and the functions are provided
by libavutil, so the wrapper should not be needed anymore.
Found-by: jamrial
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>