Until now, the decoding API was restricted to outputting 0 or 1 frames
per input packet. It also enforced a somewhat rigid dataflow in general.
This new API seeks to relax these restrictions by decoupling input and
output. Instead of doing a single call on each decode step, which may
consume the packet and may produce output, the new API requires the user
to send input first, and then ask for output.
For now, there are no codecs supporting this API. The API can work with
codecs using the old API, and most code added here is to make them
interoperate. The reverse is not possible, although for audio it might be.
From Libav commit 05f66706d1.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
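For illustration, a minimal sketch of the decoupled flow described above, assuming the avcodec_send_packet()/avcodec_receive_frame() entry points this API introduces:

    #include <libavcodec/avcodec.h>

    /* Decode one packet (or flush with pkt == NULL) and drain every
     * frame it produces: input and output are now decoupled. */
    static int decode(AVCodecContext *avctx, AVFrame *frame, const AVPacket *pkt)
    {
        /* Send input first; the codec may buffer it internally. */
        int ret = avcodec_send_packet(avctx, pkt);
        if (ret < 0)
            return ret;

        /* Then ask for output until the codec wants more input. */
        while (ret >= 0) {
            ret = avcodec_receive_frame(avctx, frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;
            if (ret < 0)
                return ret;
            /* ... use the frame here ... */
            av_frame_unref(frame);
        }
        return 0;
    }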
Check that the required plane pointers, and only those, are set up.
Currently this does not enforce anything for the palette pointer of
pseudopal formats, as I am unsure about the requirements.
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
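For illustration, a hedged sketch of such a check, deriving the required plane count from the pixel format descriptor (an assumption for this example, not the commit's actual code):

    #include <libavutil/error.h>
    #include <libavutil/frame.h>
    #include <libavutil/pixdesc.h>

    /* Require exactly the plane pointers the format needs: required
     * planes non-NULL, all remaining pointers NULL. Palette/pseudopal
     * pointers are deliberately not enforced here. */
    static int check_plane_pointers(const AVFrame *frame)
    {
        int planes = av_pix_fmt_count_planes(frame->format);
        if (planes < 0)
            return planes;
        for (int i = 0; i < AV_NUM_DATA_POINTERS; i++) {
            if (i < planes && !frame->data[i])
                return AVERROR(EINVAL);  /* required plane missing */
            if (i >= planes && frame->data[i])
                return AVERROR(EINVAL);  /* unexpected extra pointer */
        }
        return 0;
    }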
This allows copying information related to the stream ID from the demuxer
to the muxer, making it possible, for example, to retain information about
synchronous and asynchronous KLV data packets. This information is used
in the muxer when remuxing to distinguish the two kinds of packets (if the
information is lacking, data packets are considered synchronous).
The fate reference changes are due to the use of
av_packet_merge_side_data(), which increases the size of the output
packets, since side data is merged into the packet data.
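For illustration, a hedged sketch of carrying the stream ID across a remux, assuming output streams are created one-to-one from the input (not the commit's actual code):

    #include <libavformat/avformat.h>

    /* Copy the demuxer-assigned stream IDs to the muxer's streams, so
     * information keyed on the ID (e.g. the KLV packet kind) survives. */
    static int copy_stream_ids(const AVFormatContext *in, AVFormatContext *out)
    {
        for (unsigned i = 0; i < in->nb_streams; i++) {
            AVStream *ost = avformat_new_stream(out, NULL);
            if (!ost)
                return AVERROR(ENOMEM);
            ost->id = in->streams[i]->id;
        }
        return 0;
    }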
This API is intended to allow passing around codec parameters without
using a full AVCodecContext (which also contains codec options and
encoder/decoder state).
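For illustration, a minimal sketch of moving parameters between contexts, assuming the avcodec_parameters_* helpers this API provides:

    #include <libavcodec/avcodec.h>

    /* Snapshot one context's parameters and apply them to another,
     * without sharing codec options or encoder/decoder state. */
    static int copy_params(AVCodecContext *dst, const AVCodecContext *src)
    {
        AVCodecParameters *par = avcodec_parameters_alloc();
        int ret;

        if (!par)
            return AVERROR(ENOMEM);
        ret = avcodec_parameters_from_context(par, src);
        if (ret >= 0)
            ret = avcodec_parameters_to_context(dst, par);
        avcodec_parameters_free(&par);
        return ret;
    }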
Note to maintainers: update tools
Note to maintainers: set a default whitelist for your protocol
If that makes no sense, consider setting "none", thus requiring the user to specify a whitelist
for sub-protocols to be opened.
Note: testing and checking for missing changes is needed.
Reviewed-by: Andreas Cadhalpun <andreas.cadhalpun@googlemail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
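For illustration, a sketch of how a caller can supply such a whitelist when opening input, assuming the "protocol_whitelist" option is honored through the options dictionary:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    /* Restrict which protocols may be opened, including sub-protocols
     * the demuxer opens on its own (e.g. playlist segments). */
    static int open_whitelisted(AVFormatContext **fmt, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "protocol_whitelist", "file,http,https,tcp,tls", 0);
        ret = avformat_open_input(fmt, url, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }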
Some (de)muxers open additional files beyond the main IO context.
Currently, they call avio_open() directly, which prevents the caller
from using custom IO for such streams.
This commit adds callbacks to AVFormatContext that default to
avio_open2()/avio_close(), but can be overridden by the caller. All
muxers and demuxers using AVIO are switched to using those callbacks
instead of calling avio_open()/avio_close() directly.
(de)muxers that use the URLProtocol layer directly instead of AVIO
remain unconverted for now. This should be fixed in later commits.
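For illustration, a sketch of overriding the new callbacks, assuming the io_open/io_close members added to AVFormatContext:

    #include <libavformat/avformat.h>

    /* Log every file the (de)muxer touches, then defer to the
     * default avio_open2()-based behaviour. */
    static int my_io_open(AVFormatContext *s, AVIOContext **pb,
                          const char *url, int flags, AVDictionary **options)
    {
        av_log(s, AV_LOG_INFO, "opening %s\n", url);
        return avio_open2(pb, url, flags, &s->interrupt_callback, options);
    }

    static void my_io_close(AVFormatContext *s, AVIOContext *pb)
    {
        avio_close(pb);
    }

    /* Set before avformat_open_input()/avformat_write_header():
     *     ctx->io_open  = my_io_open;
     *     ctx->io_close = my_io_close;
     */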
This solves the problem discussed in https://ffmpeg.org/pipermail/ffmpeg-devel/2015-September/179238.html
by allowing AVOutputFormat::write_header to be delayed until after packets have been
run through required bitstream filters in order to generate global extradata.
It also provides a mechanism by which a muxer can add a bitstream filter to a
stream automatically, rather than prompting the user to do so.
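For illustration, a hedged sketch of the muxer side, assuming an internal check_bitstream hook plus a helper such as ff_stream_add_bitstream_filter() (internal API; names and return conventions are assumptions here):

    #include "avformat.h"
    #include "internal.h"

    /* Called for packets before the header is committed; a muxer can
     * insert a required filter automatically instead of asking the
     * user to do it (illustration only). */
    static int example_check_bitstream(AVFormatContext *s, AVPacket *pkt)
    {
        AVStream *st = s->streams[pkt->stream_index];
        /* e.g. convert in-band ADTS AAC to the global-header form */
        if (st->codecpar->codec_id == AV_CODEC_ID_AAC)
            return ff_stream_add_bitstream_filter(st, "aac_adtstoasc", NULL);
        return 1;
    }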
Applications are not supposed to mess with links;
they should close the sinks.
Furthermore, this function does not distinguish which end
of the link caused the close and does not carry a timestamp.
This callback returns the encoded data of a frame, one slice at a time,
directly when that slice is encoded, instead of waiting for the full
frame to be done. However, its usefulness is debatable, since it looks
like it is just a convoluted way to get data at the lowest possible
latency, or a somewhat hacky way to store H.263 in RFC 2190 RTP
encapsulation.
Moreover, when multi-threading is enabled (which it is by default), the
order of returned slices is not deterministic at all, making the use of
this callback unreliable (or at the very least, more complicated than it
should be).
So, for the reasons stated above, and because it is used by only a single
encoder family (mpegvideo), this field is deemed unnecessary,
overcomplicated, and not really belonging to libavcodec. Libavformat
features a complete implementation of RFC 2190 for any other case.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This side data type is meant to be added to AVStream side data.
A fallback track indicates an alternate track to use when the
current track cannot be decoded for some reason, e.g. no decoder
is available for its codec.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
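For illustration, a sketch of attaching it, assuming an int payload holding the fallback stream's index (the exact payload layout is an assumption here):

    #include <libavformat/avformat.h>

    /* Mark 'alt' as the track to fall back to when 'st' cannot be
     * decoded, via the new stream side data type. */
    static int set_fallback(AVStream *st, const AVStream *alt)
    {
        uint8_t *sd = av_stream_new_side_data(st, AV_PKT_DATA_FALLBACK_TRACK,
                                              sizeof(int));
        if (!sd)
            return AVERROR(ENOMEM);
        *(int *)sd = alt->index;
        return 0;
    }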