This will be used to allow writing file sequences, and to allow the tee
output to write to multiple places in parallel.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This allows callers with avio write callbacks to get the bytestream
positions that correspond to keyframes, suitable for live streaming.
In the simplest form, a caller could expect that a header is written
to the bytestream during avformat_write_header, and that the data
output to the avio context during e.g. av_write_frame corresponds
exactly to the current packet passed in.
When combined with av_interleaved_write_frame, and with muxers that
do buffering (most muxers that do some sort of fragmenting or
clustering), the mapping from input data to bytestream positions
is nontrivial.
This allows callers to get information directly about which part
of the bytestream is what, without having to resort to assumptions
about the muxer behaviour.
One keyframe/fragment/block can still be split across multiple calls
(if it is larger than the AVIOContext buffer); in that case the
callback is invoked with e.g. AVIO_DATA_MARKER_SYNC_POINT for the
first part, followed by AVIO_DATA_MARKER_UNKNOWN the next time it is
called with the following data.
Signed-off-by: Martin Storsjö <martin@martin.st>
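A hypothetical sketch of how a live-streaming caller might use this,
assuming the write_data_type callback signature this change adds; the
struct Output, write_data_type_cb and open_output names are placeholders
of ours, and error handling is abbreviated:

    #include <stdint.h>
    #include <stdio.h>
    #include <libavformat/avio.h>
    #include <libavutil/error.h>
    #include <libavutil/mem.h>

    struct Output {
        FILE   *f;
        int64_t pos;            /* running bytestream position */
        int64_t last_sync_pos;  /* offset where the latest keyframe starts */
    };

    static int write_data_type_cb(void *opaque, uint8_t *buf, int buf_size,
                                  enum AVIODataMarkerType type, int64_t time)
    {
        struct Output *out = opaque;

        /* Record the position of each sync point, e.g. to be able to
         * start serving a live stream from this byte offset. */
        if (type == AVIO_DATA_MARKER_SYNC_POINT)
            out->last_sync_pos = out->pos;

        out->pos += buf_size;
        return fwrite(buf, 1, buf_size, out->f) == (size_t)buf_size ?
               buf_size : AVERROR(EIO);
    }

    /* Setup: install write_data_type instead of a plain write_packet
     * callback on the AVIOContext. */
    static AVIOContext *open_output(struct Output *out)
    {
        unsigned char *buffer = av_malloc(4096);
        AVIOContext *avio = avio_alloc_context(buffer, 4096, 1, out,
                                               NULL, NULL, NULL);
        if (avio)
            avio->write_data_type = write_data_type_cb;
        return avio;
    }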
The new function behaves the same as av_log_format_line, but also forwards
the return value from the underlying snprintf call. This will allow
callers to accurately determine the size requirements for the line buffer.
Signed-off-by: Andreas Weis <github@ghulbus-inc.de>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
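A sketch of a log callback that uses the returned length to retry with
an exactly sized buffer; log_cb is a placeholder name of ours, and it
would be installed with av_log_set_callback():

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <libavutil/log.h>

    static void log_cb(void *ptr, int level, const char *fmt, va_list vl)
    {
        static int print_prefix = 1; /* single-threaded sketch only */
        int saved_prefix = print_prefix;
        char buf[256];
        va_list vl2;

        va_copy(vl2, vl);
        int needed = av_log_format_line2(ptr, level, fmt, vl, buf,
                                         sizeof(buf), &print_prefix);
        if (needed >= (int)sizeof(buf)) {
            /* The fixed buffer was too small; the return value is the
             * full length, so retry once with a matching allocation. */
            char *line = malloc(needed + 1);
            if (line) {
                print_prefix = saved_prefix; /* redo the prefix decision */
                av_log_format_line2(ptr, level, fmt, vl2, line,
                                    needed + 1, &print_prefix);
                fputs(line, stderr);
                free(line);
            }
        } else if (needed >= 0) {
            fputs(buf, stderr);
        }
        va_end(vl2);
    }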
Until now, the decoding API was restricted to outputting 0 or 1 frames
per input packet. It also enforced a somewhat rigid dataflow in general.
This new API seeks to relax these restrictions by decoupling input and
output. Instead of doing a single call on each decode step, which may
consume the packet and may produce output, the new API requires the user
to send input first, and then ask for output.
For now, there are no codecs supporting this API. The API can work with
codecs using the old API, and most of the code added here is to make
them interoperate. The reverse is not possible, although for audio it
might be.
From Libav commit 05f66706d1.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
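A minimal sketch of the resulting dataflow with avcodec_send_packet()
and avcodec_receive_frame(); the decode() helper is illustrative, and
error handling is abbreviated:

    #include <libavcodec/avcodec.h>

    static int decode(AVCodecContext *avctx, const AVPacket *pkt,
                      AVFrame *frame)
    {
        /* Send input first; a NULL packet enters draining (flush) mode. */
        int ret = avcodec_send_packet(avctx, pkt);
        if (ret < 0)
            return ret;

        /* Then ask for output until the decoder wants more input. */
        for (;;) {
            ret = avcodec_receive_frame(avctx, frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;   /* needs more input, or fully drained */
            if (ret < 0)
                return ret; /* a real decoding error */

            /* ... process the decoded frame ... */
            av_frame_unref(frame);
        }
    }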
Check that the required plane pointers, and only those, are set up.
Currently does not enforce anything for the palette pointer of
pseudopal formats, as I am unsure about the requirements.
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
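A rough sketch of what such a check amounts to; check_plane_pointers is
a placeholder name of ours, av_pix_fmt_count_planes() reports how many
planes a format requires, and the AV_PIX_FMT_FLAG_PSEUDOPAL special case
mirrors the caveat above:

    #include <libavutil/frame.h>
    #include <libavutil/pixdesc.h>

    static int check_plane_pointers(const AVFrame *frame)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(frame->format);
        int planes = av_pix_fmt_count_planes(frame->format);

        for (int i = 0; i < AV_NUM_DATA_POINTERS; i++) {
            /* Palette pointer of pseudopal formats: requirements are
             * unclear, so deliberately left unenforced. */
            if (i == 1 && desc && (desc->flags & AV_PIX_FMT_FLAG_PSEUDOPAL))
                continue;
            if (i < planes && !frame->data[i])
                return -1; /* required plane pointer missing */
            if (i >= planes && frame->data[i])
                return -1; /* pointer set beyond the required planes */
        }
        return 0;
    }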
This allows copying information related to the stream ID from the
demuxer to the muxer, thus allowing, for example, retention of
information related to synchronous and asynchronous KLV data packets.
This information is used in the muxer when remuxing to distinguish the
two kinds of packets (if the information is lacking, data packets are
considered synchronous).
The fate reference changes are due to the use of
av_packet_merge_side_data(), which increases the size of the output
packet, since side data is merged into the packet data.
This API is intended to allow passing around codec parameters without
using the full AVCodecContext (which also contains codec options and
encoder/decoder state).
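A minimal sketch of the intended usage, assuming a caller that already
has a configured codec context; extract_params is a placeholder name of
ours:

    #include <libavcodec/avcodec.h>

    /* Copy the essential stream properties out of a codec context,
     * without dragging along options or codec state. */
    static AVCodecParameters *extract_params(const AVCodecContext *avctx)
    {
        AVCodecParameters *par = avcodec_parameters_alloc();
        if (!par)
            return NULL;
        if (avcodec_parameters_from_context(par, avctx) < 0) {
            avcodec_parameters_free(&par);
            return NULL;
        }
        return par; /* free with avcodec_parameters_free() when done */
    }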