libx264 does have a field for opaque data to pass along with frames
through the encoder, but it is a pointer, while the libavcodec
reordered_opaque field is an int64_t. Therefore, allocate an array
within the libx264 wrapper, where reordered_opaque values in flight
are stored, and pass a pointer to this array to libx264.
Update the public libavcodec documentation for the AVCodecContext
field to explain this usage, and add a codec capability that allows
detecting whether an encoder handles this field.
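A minimal sketch of the idea (illustrative only; the struct and helper
names are assumptions, not the actual wrapper code):

    #include <stdint.h>
    #include <x264.h>
    #include "libavcodec/avcodec.h"

    /* Keep the int64_t reordered_opaque values in flight in an array
     * owned by the wrapper and give x264 a pointer into that array,
     * since x264's opaque field is a void pointer. */
    typedef struct X264WrapperContext {
        x264_picture_t pic;
        int64_t *reordered_opaque;     /* values in flight */
        int      nb_reordered_opaque;  /* array size */
        int      next_reordered_opaque;
    } X264WrapperContext;

    static void store_opaque(X264WrapperContext *ctx, const AVFrame *frame)
    {
        int slot = ctx->next_reordered_opaque;

        ctx->reordered_opaque[slot] = frame->reordered_opaque;
        ctx->pic.opaque             = &ctx->reordered_opaque[slot];
        ctx->next_reordered_opaque  = (slot + 1) % ctx->nb_reordered_opaque;
    }

    static void load_opaque(AVCodecContext *avctx, const x264_picture_t *pic_out)
    {
        /* on output, read the value back through the pointer x264 returned */
        avctx->reordered_opaque = *(const int64_t *)pic_out->opaque;
    }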
Signed-off-by: Martin Storsjö <martin@martin.st>
The existing av_mediacodec_release_buffer allows the user to render
or discard the Surface-backed frame. This new method allows the user
to control exactly when the frame will be rendered to its SurfaceView.
Available since Android API 21.
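A hedged usage sketch; the function name
av_mediacodec_render_buffer_at_time(), the nanosecond timestamp and the
use of frame->data[3] as the buffer handle are assumptions about the
new method described above:

    #include "libavcodec/mediacodec.h"
    #include "libavutil/frame.h"

    /* Schedule rendering of a Surface-backed mediacodec frame for a
     * specific time instead of rendering it immediately with
     * av_mediacodec_release_buffer(buffer, 1). */
    static int render_at(AVFrame *frame, int64_t render_time_ns)
    {
        AVMediaCodecBuffer *buffer = (AVMediaCodecBuffer *)frame->data[3];

        return av_mediacodec_render_buffer_at_time(buffer, render_time_ns);
    }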
Signed-off-by: Aman Gupta <aman@tmm1.net>
Simple parser that sets the keyframe flag, frame type, structure, width,
height, and pixel format, plus the stream profile and level.
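A hedged sketch of driving the parser from application code and reading
back the fields it fills in (error handling omitted):

    #include "libavcodec/avcodec.h"

    static void parse_av1(const uint8_t *data, int size)
    {
        AVCodecParserContext *parser = av_parser_init(AV_CODEC_ID_AV1);
        AVCodecContext *avctx = avcodec_alloc_context3(NULL);
        uint8_t *out;
        int out_size;

        while (size > 0) {
            int ret = av_parser_parse2(parser, avctx, &out, &out_size,
                                       data, size, AV_NOPTS_VALUE,
                                       AV_NOPTS_VALUE, 0);
            data += ret;
            size -= ret;
            if (out_size > 0) {
                /* parser->key_frame, pict_type, width, height, format
                 * (and the profile/level on avctx) are now filled in */
            }
        }

        av_parser_close(parser);
        avcodec_free_context(&avctx);
    }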
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
Create a new AVPacket side data type for Active Format Description,
which mirrors the side data type found in AVFrame. The primary
use case for this is ensuring AFD gets preserved in the V210
encoder, so that the decklink libavdevice can output AFD.
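A hedged sketch of how a caller might propagate AFD from a decoded
frame into the new packet side data; the single-byte payload mirroring
AV_FRAME_DATA_AFD is an assumption here:

    #include <string.h>
    #include "libavcodec/avcodec.h"
    #include "libavutil/frame.h"

    static int copy_afd_to_packet(const AVFrame *frame, AVPacket *pkt)
    {
        AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_AFD);
        uint8_t *buf;

        if (!sd)
            return 0;

        buf = av_packet_new_side_data(pkt, AV_PKT_DATA_AFD, sd->size);
        if (!buf)
            return AVERROR(ENOMEM);
        memcpy(buf, sd->data, sd->size);
        return 0;
    }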
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
This was reduced from 128 in libav commit
192f1984b1, but since we support unknown channel
layouts, we can increase this limit.
Fixes ticket #6332.
Signed-off-by: Marton Balint <cus@passwd.hu>
This commit implements a full ATRAC9 decoder, a simple low-delay codec
developed by Sony and used in most PSVita games, some PS3 games and some
PS4 games. It's similar to AAC in that it uses Huffman-coded
scalefactors, but instead of vector quantization it just Huffman codes
the spectral coefficients (in a way similar to how Opus splits band
energy coding into coarse and fine precision). It opts to write rather
large Huffman codes by packing several small coefficients into one
Huffman-coded symbol, though I don't believe this increases efficiency
at all.
Band extension implements SBC in a simple way: first it mirrors the
lower spectrum onto the higher frequencies, then it uses one of 5
filters to shape it. Noise substitution is implemented via 2 of them.
Unlike previous ATRAC codecs, there's no QMF; this is a standard MDCT
codec.
Based on the reverse-engineering work of Alex Barney.
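A purely illustrative sketch of the band extension step described
above (not the decoder's actual code; names and the exact shaping are
assumptions):

    /* Mirror the decoded low-frequency coefficients onto the high band
     * and apply a per-coefficient shaping gain derived from one of the
     * transmitted filters. */
    static void band_extend(float *coeffs, int low_end, int total,
                            const float *shape)
    {
        for (int i = low_end; i < total; i++) {
            /* mirror: walk the low spectrum backwards from its top */
            int src = low_end - 1 - (i - low_end) % low_end;
            coeffs[i] = coeffs[src] * shape[i - low_end];
        }
    }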
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
CLI options -maxrate, -bufsize and -rc_init_occupancy can now be picked
up by the x265 wrapper. The minimum rc_init_occupancy has to be 1001 to
avoid x265 setting it to vbv-bufsize.
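For reference, a hedged sketch of the AVCodecContext fields these CLI
options map to when using the API directly (example values only):

    #include "libavcodec/avcodec.h"

    static void setup_vbv(AVCodecContext *avctx)
    {
        avctx->rc_max_rate                 = 5000000;  /* -maxrate 5M */
        avctx->rc_buffer_size              = 10000000; /* -bufsize 10M */
        avctx->rc_initial_buffer_occupancy = 5000000;  /* -rc_init_occupancy */
    }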
Unbreaks files with unknown extradata; the Canopus decoder accepts files
both with and without this extradata (24-byte "INFO", 16-byte "RDRT",
rest "FIEL").
Reported-by: Peter Bubestinger
Tested-by: Piotr Bandurski
Most decoders (pgssubdec, ccaption_dec) use -1 or UINT32_MAX for a
subtitle event that should be cleared at the next event.
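A tiny hedged sketch of that convention from a decoder's point of view:

    #include <stdint.h>
    #include "libavcodec/avcodec.h"

    /* Mark a decoded subtitle as valid until the next event arrives. */
    static void mark_until_next_event(AVSubtitle *sub)
    {
        sub->start_display_time = 0;
        sub->end_display_time   = UINT32_MAX; /* cleared at the next event */
    }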
Signed-off-by: Marton Balint <cus@passwd.hu>
It works as a drop-in replacement for the deprecated av_dup_packet(),
ensuring a packet is reference counted.
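A hedged sketch of the replacement in caller code:

    #include "libavcodec/avcodec.h"

    /* Where code previously called the deprecated av_dup_packet(pkt). */
    static int own_packet(AVPacket *pkt)
    {
        int ret = av_packet_make_refcounted(pkt);
        if (ret < 0)
            return ret;
        /* pkt->buf is now guaranteed to be set */
        return 0;
    }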
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
This is for applications which want to explicitly check for invalid
UTF-8 themselves, and take actions better than silently dropping invalid
subtitles. (It's pretty much silent because sporadic avcodec error
messages are so common that you can't reasonably display them in a
prominent and meaningful way in an application GUI.)
The default behavior of the mediacodec decoder before this commit
was to delay flushes until all pending hardware frames were
returned to the decoder. This was useful for certain types of
applications, but was unexpected behavior for others.
The new default behavior with this commit is now to execute
flushes immediately to invalidate all pending frames. The old
behavior can be enabled by setting delay_flush=1.
With the new behavior, video players implementing seek can simply
call flush on the decoder without having to worry about whether
they have one or more mediacodec frames still buffered in their
rendering pipeline. Previously, all these frames had to be
explicitly freed (or rendered) before the seek/flush would execute.
The new behavior matches the behavior of all other lavc decoders,
reducing the amount of special casing required when using the
mediacodec decoder.
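A hedged sketch of opting back into the old behavior from application
code:

    #include "libavcodec/avcodec.h"
    #include "libavutil/dict.h"

    /* Open a mediacodec decoder with delayed flushes re-enabled via the
     * delay_flush option described above. */
    static int open_with_delay_flush(AVCodecContext *avctx, const AVCodec *codec)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "delay_flush", "1", 0);
        ret = avcodec_open2(avctx, codec, &opts);
        av_dict_free(&opts);
        return ret;
    }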
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Matthieu Bouron <matthieu.bouron@gmail.com>
nvenc doesn't support P016, but we have two problems today:
1) We declare support for YUV444P16, which nvenc also doesn't support.
We do this because it's the only pix_fmt we have that can
approximate nvenc's internal format, which is YUV444P10 with data in
the MSBs instead of the LSBs. Because the declared format is a 16 bit
one, it will be preferentially chosen when encoding >10 bit content,
but that content will normally be YUV420P12 or P016, which should
get mapped to P010 and not YUV444P10.
2) Transcoding P016 content with nvenc should be possible in a pure
hardware pipeline, and that can't be done if nvenc doesn't say it
accepts P016. By mapping it to P010, we can use it, albeit with
truncation. I have established that swscale doesn't know how to
dither to 10 bits, so we'd get truncation anyway, even if we tried
to do this 'properly'.
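A purely illustrative sketch of the resulting mapping (the helper name
is an assumption):

    #include "libavutil/pixfmt.h"

    /* Map declared input formats to what is actually fed to NVENC:
     * P016 is accepted but treated like P010, i.e. the extra precision
     * is truncated, as explained above. */
    static enum AVPixelFormat nvenc_map_pix_fmt(enum AVPixelFormat fmt)
    {
        switch (fmt) {
        case AV_PIX_FMT_P016: /* truncated to 10 bits */
        case AV_PIX_FMT_P010:
            return AV_PIX_FMT_P010;
        default:
            return fmt;
        }
    }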