Add an AV_PIX_FMT_NE macro for RGB32FBE/RGB32FLE and another for
RGBA32FBE/RGBA32FLE, covering packed 32-bit float RGB samples and
packed 32-bit float RGBA samples, respectively.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Leo Izen <leo.izen@gmail.com>
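For reference, AV_PIX_FMT_NE() simply selects the big- or little-endian
variant at compile time, so the new native-endian aliases boil down to a
sketch along these lines (alias names follow the commit text and may
differ in the final header):

    /* pixfmt.h: AV_PIX_FMT_NE(be, le) expands to AV_PIX_FMT_##be on
     * big-endian builds and to AV_PIX_FMT_##le otherwise. */
    #define AV_PIX_FMT_RGB32F  AV_PIX_FMT_NE(RGB32FBE,  RGB32FLE)  /* packed float RGB  */
    #define AV_PIX_FMT_RGBA32F AV_PIX_FMT_NE(RGBA32FBE, RGBA32FLE) /* packed float RGBA */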
Adding a PCR at every keyframe can be undesirable when -pcr_period is
specified. Add a flag to disable this behavior.
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Apparently this option was intended (the context contains a
currently-unused frame_rate field), but was never added. This results in
the output timebase being unset after config_output(), so the input
audio timebase ends up being used for video output, which is clearly
wrong.
Add an option for setting output video framerate. Also set output frame
durations.
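A minimal sketch of what such a config_output() typically does, assuming
a hypothetical filter context with a frame_rate option (names are
illustrative, not this filter's actual code):

    #include "libavfilter/avfilter.h"
    #include "libavutil/rational.h"

    typedef struct MyFilterContext {      /* hypothetical private context */
        const AVClass *class;
        AVRational frame_rate;            /* exposed as the new AVOption */
        int64_t frame_index;
    } MyFilterContext;

    static int config_output(AVFilterLink *outlink)
    {
        MyFilterContext *s = outlink->src->priv;

        outlink->frame_rate = s->frame_rate;           /* e.g. 25/1 */
        outlink->time_base  = av_inv_q(s->frame_rate); /* 1/frame_rate */
        return 0;
    }

    /* When sending a frame:
     *     frame->pts      = s->frame_index++;
     *     frame->duration = 1;   // one frame in 1/frame_rate units
     */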
It has been deprecated in favor of the aresample filter for almost 10
years.
Another thing this option can do is drop audio timestamps and have them
generated by the encoding code or the muxer, but
- for encoding, this can already be done with the setpts filter
- for muxing this should almost never be done as timestamp generation by
the muxer is deprecated, but people who really want to do this can use
the setts bitstream filter
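For audio sync compensation, the long-recommended replacement is the
aresample filter mentioned above, e.g. something along these lines (the
async value shown is illustrative):

    ffmpeg -i input.mkv -af aresample=async=1 -c:v copy output.mkv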
Users cannot do anything with its contents.
Making it opaque might allow us to avoid one level of indirection.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
avcodec_enum_to_chroma_pos() and avcodec_chroma_pos_to_enum()
deal with enum AVChromaLocation which is defined in lavu.
These functions are therefore replaced by
av_chroma_location_enum_to_pos() and av_chroma_location_pos_to_enum().
This commit provides the necessary deprecations. It also already makes
these functions wrappers around the corresponding lavu functions,
as not doing so would force one to disable deprecation warnings.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
They are intended as replacements for avcodec_enum_to_chroma_pos()
and avcodec_chroma_pos_to_enum().
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
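A hedged migration sketch, assuming the new helpers are declared in
libavutil (pixdesc.h) with the same signatures as the avcodec functions
they replace:

    #include <libavutil/pixdesc.h>

    static void migrate_example(void)
    {
        int xpos, ypos;

        /* before (deprecated):
         *     avcodec_enum_to_chroma_pos(&xpos, &ypos, AVCHROMA_LOC_LEFT); */
        av_chroma_location_enum_to_pos(&xpos, &ypos, AVCHROMA_LOC_LEFT);

        /* and the reverse mapping: */
        enum AVChromaLocation loc = av_chroma_location_pos_to_enum(xpos, ypos);
        (void)loc;
    }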
They are also frequently used in libavformat.
This change does not cause any breakage as avcodec.h
includes defs.h.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
There is no check for whether these supposedly redundant PPS
are actually redundant. One could check via memcmp which would
work in practice* (because all content buffers are initially
zero-allocated), but this is not portable as compilers may
trash padding inside structures as they wish.
If the PPS is not actually redundant, the output is garbage.
This happens with several files from the FATE suite: e.g.
h264-conformance/CVCANLMA2_Sony_C.jsv no longer decodes correctly,
whereas h264-conformance/CABA3_TOSHIBA_E.264 even
fails in ff_cbs_write_packet(), because the inferred value
of num_ref_idx_l0_active_minus1 does not match the value set
in the slice (this happens when num_ref_idx_l0_default_active_minus1
changes in the PPS; the value in the slice header is inferred from
the original PPS's num_ref_idx_l0_default_active_minus1).
*: Unless slice_group_id is used, i.e. unless slice_group_map_type
is 6.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
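Purely to illustrate the padding point (not CBS code): two structs can
be member-wise equal while memcmp() still reports a difference, so a
portable equality check has to compare the fields themselves.

    #include <stdint.h>
    #include <string.h>

    struct example {
        uint8_t  a;      /* typically followed by 3 padding bytes */
        uint32_t b;
    };

    static int example_equal(const struct example *x, const struct example *y)
    {
        return x->a == y->a && x->b == y->b;   /* portable */
    }
    /* memcmp(x, y, sizeof(*x)) also compares the padding bytes and may
     * therefore differ even when example_equal() returns 1. */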
~4x faster than the C version.
The shuffles in the 15pt dim1 are seriously expensive. Not happy with it,
but I'm content.
Can be easily converted to pure AVX by removing all vpermpd/vpermps
instructions.
This maps to the vpxenc argument with the same name and to the
VP9E_SET_MIN_GF_INTERVAL codec control.
Signed-off-by: James Zern <jzern@google.com>
Reviewed-by: Vignesh Venkatasubramanian <vigneshv@google.com>
Add "slice" intra refresh type to h264_qsv and hevc_qsv. This type means
horizontal refresh by slices without overlapping. Also update the doc.
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
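A hypothetical invocation, assuming the existing int_ref_type option of
the QSV encoders gains the new value:

    ffmpeg -i input.mp4 -c:v hevc_qsv -int_ref_type slice output.mp4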
Currently AVBR is disabled and VBR is the default method when maxrate is
not specified on Linux, whereas AVBR is the default when maxrate is not
specified on Windows. To make the user experience consistent across
Linux and Windows, use VBR by default on Windows as well when maxrate is
not specified. Users need to set both avbr_accuracy and avbr_convergence
to non-zero values explicitly and leave maxrate unspecified if AVBR is
expected. In addition, AVBR works for H264 and HEVC only in the SDK.
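For example, AVBR could then be requested explicitly along these lines
(option names are taken from the text above; values are illustrative):

    ffmpeg -i input.mp4 -c:v h264_qsv -b:v 2M -avbr_accuracy 1 -avbr_convergence 1 output.mp4

Note that -maxrate is left unset, as required for AVBR.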
$ ffmpeg.exe -v verbose -f lavfi -i yuvtestsrc -vf "format=nv12" -c:v vp9_qsv -f null -
These are the formats we want/need to use when dealing with the Intel
VAAPI decoder for 12bit 4:2:0, 12bit 4:2:2, 10bit 4:4:4 and 12bit 4:4:4
respectively.
As with the already supported Y210 and YUVX (XVUY) formats, they are
based on formats Microsoft picked as their preferred 4:2:2 and 4:4:4
video formats, and Intel ran with it.
P012 and Y212 are simply an extension of the 10-bit formats to say 12 bits
will be used, with 4 unused bits instead of 6.
XV30, and XV36, as exotic as they sound, are variants of Y410 and Y412
where the alpha channel is left formally undefined. We prefer these
over the alpha versions because the hardware cannot actually do
anything with the alpha channel and respecting it is just overhead.
Y412/XV36 is a normal-looking packed 4-channel format where each
channel is 16 bits wide but only the 12 MSBs are used (like P012).
Y410/XV30 packs three 10-bit channels into 32 bits with 2 bits of alpha,
like A/X2RGB10 style formats. This annoying layout forced me to define
the BE version as a bitstream format. It seems like our pixdesc
infrastructure can handle the LE version being byte-defined, but not
when it's reversed. If there's a better way to handle this, please
let me know. Our existing X2 formats all have the 2 bits at the MSB
end, but this format places them at the LSB end and that seems to be
the root of the problem.
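As a small illustration of the 12-in-16 layout shared by P012, Y212 and
XV36, recovering a sample value is just a shift (sketch, not library
code):

    #include <stdint.h>

    /* Each 12-bit sample sits in the 12 MSBs of a 16-bit word, with the
     * 4 low bits unused (mirroring P010/Y210, which leave 6 unused). */
    static inline unsigned sample12_from_u16(uint16_t v)
    {
        return v >> 4;
    }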
It has been deprecated in b4f59beeb4,
but the attribute_deprecated was not set and there was no entry
in APIchanges. This commit adds these and schedules it for removal.
Given that the reason behind the deprecation is exactly the same
as in av_fopen_utf8(), reuse its FF_API_AV_FOPEN_UTF8.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is the alphaless version of VUYA that I introduced recently. After
further discussion and noting that the Intel vaapi driver explicitly
lists XYUV as a supported format for encoding and decoding 8bit 444
content, we decided to switch our usage and avoid the overhead of
having a declared alpha channel around.
Note that I am not removing VUYA, as this turned out to have another
use, which was to replace the need for v408enc/dec when dealing with
the format.
The vaapi switching will happen in the next change.
Add the adaptive_i/b features to hevc_qsv. Adaptive_i allows changing the
frame type from P or B to I. Adaptive_b allows changing the frame type
from B to P.
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
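A hypothetical command line exercising both options on hevc_qsv:

    ffmpeg -i input.mp4 -c:v hevc_qsv -adaptive_i 1 -adaptive_b 1 output.mp4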
According to its documentation it returns "pts of the last muxed packet
+ its duration", but the value it actually returns right now is the
(possibly guessed) dts after muxer-internal bitstream filtering (if
any).
This function was added for ffmpeg.c, but it is not used there anymore.
Since the value it returns is ill-defined and so inappropriate for any
serious use, deprecate it.
The present default value of 0 will render the overlay video invisible.
A default of 1.0 is consistent with most common use cases.
Signed-off-by: Fei Wang <fei.w.wang@intel.com>
Reviewed-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>