The main use case is proxying avio through a foreign I/O layer and a
custom AVIO context without losing latency and performance characteristics.
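A minimal sketch of the pattern this enables, using stdio as a stand-in
for the foreign I/O layer (buffer size illustrative, error checks
mostly omitted):

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavutil/mem.h>

    /* Read callback standing in for the foreign I/O layer. */
    static int foreign_read(void *opaque, uint8_t *buf, int buf_size)
    {
        size_t n = fread(buf, 1, buf_size, (FILE *)opaque);
        return n ? (int)n : AVERROR_EOF;
    }

    static int open_via_custom_avio(AVFormatContext **ic, FILE *f)
    {
        uint8_t *buf = av_malloc(4096);
        AVIOContext *pb = avio_alloc_context(buf, 4096, 0, f,
                                             foreign_read, NULL, NULL);
        *ic = avformat_alloc_context();
        (*ic)->pb = pb; /* demux through the custom context */
        return avformat_open_input(ic, NULL, NULL, NULL);
    }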
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
This disables everything that was deprecated at least 18 months ago.
Readjust the minimum API version as needed, postponing any
API-incompatible changes until the next bump.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Use the webm muxer for the VP8, VP9 and Opus codecs, and the mp4 muxer otherwise.
Signed-off-by: Peter Große <pegro@friiks.de>
Signed-off-by: Martin Storsjö <martin@martin.st>
This implements Spherical Video V1 and V2, as described in the
spatial-media collection by Google.
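A sketch of how a caller might query the exported metadata, assuming
the accompanying AVSphericalMapping side data API in libavutil:

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavutil/spherical.h>

    static void report_spherical(AVStream *st)
    {
        const AVSphericalMapping *map = (const AVSphericalMapping *)
            av_stream_get_side_data(st, AV_PKT_DATA_SPHERICAL, NULL);
        if (!map)
            return; /* not a spherical video stream */
        if (map->projection == AV_SPHERICAL_EQUIRECTANGULAR)
            printf("equirectangular projection\n");
        /* orientation angles are 16.16 fixed-point degrees */
        printf("yaw %g\n", (double)map->yaw / (1 << 16));
    }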
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Functionally similar to av_packet_add_side_data(). Allows the use of an
already allocated buffer as stream side data.
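A sketch of the intended usage, assuming the new helper is
av_stream_add_side_data(); on success the stream takes ownership of the
buffer, so it is only freed by the caller on failure:

    #include <libavformat/avformat.h>
    #include <libavutil/mem.h>

    static int attach_matrix(AVStream *st)
    {
        int ret;
        uint8_t *data = av_malloc(9 * sizeof(int32_t)); /* display matrix */
        if (!data)
            return AVERROR(ENOMEM);
        /* ... fill in the matrix ... */
        ret = av_stream_add_side_data(st, AV_PKT_DATA_DISPLAYMATRIX,
                                      data, 9 * sizeof(int32_t));
        if (ret < 0)
            av_free(data); /* still owned by us on failure */
        return ret;
    }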
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
The spec says:
9: Interlaced with bottom field displayed first and top field stored first
14: Interlaced with top field displayed first and bottom field stored first
And avcodec.h states:
AV_FIELD_TB, //< Top coded first, bottom displayed first
AV_FIELD_BT, //< Bottom coded first, top displayed first
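The corrected mapping implied by the two excerpts ("stored first" being
the coded order) would look roughly like this sketch:

    #include <libavcodec/avcodec.h>

    static enum AVFieldOrder field_order_from_spec(int val)
    {
        switch (val) {
        case 9:  return AV_FIELD_TB; /* top stored/coded first, bottom displayed first */
        case 14: return AV_FIELD_BT; /* bottom stored/coded first, top displayed first */
        default: return AV_FIELD_UNKNOWN;
        }
    }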
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
As long as the caller only writes packets using
av_interleaved_write_frame(), with no manual flushing, this should
allow us to always have accurate durations at the end of fragments,
since there should be at least one queued packet in each stream
(except for the stream whose current packet is being written; but if
the muxer itself does the cutting of fragments, it also has information
about the next packet for that stream).
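The caller pattern assumed above, sketched (stream setup and timestamp
rescaling omitted for brevity):

    #include <libavformat/avformat.h>

    static int remux(AVFormatContext *oc, AVFormatContext *ic)
    {
        AVPacket pkt;
        int ret;
        while ((ret = av_read_frame(ic, &pkt)) >= 0) {
            /* the muxer queues internally; no manual flushing */
            ret = av_interleaved_write_frame(oc, &pkt);
            if (ret < 0)
                return ret;
        }
        return av_write_trailer(oc);
    }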
Signed-off-by: Martin Storsjö <martin@martin.st>
This allows callers with avio write callbacks to get the bytestream
positions that correspond to keyframes, suitable for live streaming.
In the simplest form, a caller could expect that a header is written
to the bytestream during avformat_write_header(), and that the data
output to the avio context during e.g. av_write_frame() corresponds
exactly to the current packet passed in.
When combined with av_interleaved_write_frame, and with muxers that
do buffering (most muxers that do some sort of fragmenting or
clustering), the mapping from input data to bytestream positions
is nontrivial.
This allows callers to get information directly about which part
of the bytestream is what, without having to resort to assumptions
about the muxer behaviour.
One keyframe/fragment/block can still be split into multiple calls (if
it is larger than the AVIOContext buffer), in which case the callback
is called with e.g. AVIO_DATA_MARKER_SYNC_POINT for the first part,
followed by AVIO_DATA_MARKER_UNKNOWN for the following data.
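A sketch of the callback side, assuming the marker callback is
installed via the AVIOContext write_data_type field added here (stdio
used as the sink):

    #include <stdio.h>
    #include <libavformat/avformat.h>

    static int write_cb(void *opaque, uint8_t *buf, int buf_size,
                        enum AVIODataMarkerType type, int64_t time)
    {
        FILE *out = opaque;
        if (type == AVIO_DATA_MARKER_SYNC_POINT)
            fprintf(stderr, "keyframe data starts at offset %ld\n", ftell(out));
        return fwrite(buf, 1, buf_size, out) == (size_t)buf_size ?
               buf_size : AVERROR(EIO);
    }

    static AVIOContext *alloc_marker_avio(FILE *f, uint8_t *buffer, int size)
    {
        AVIOContext *pb = avio_alloc_context(buffer, size, 1, f,
                                             NULL, NULL, NULL);
        if (pb)
            pb->write_data_type = write_cb; /* used instead of write_packet */
        return pb;
    }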
Signed-off-by: Martin Storsjö <martin@martin.st>
Using this requires setting the rw_timeout option to make it
terminate, or alternatively using the interrupt callback (when used via
the API).
Signed-off-by: Martin Storsjö <martin@martin.st>
If set to a non-zero value, this limits the duration of the
retry_transfer_wrapper() loop, thus affecting ffurl_read*() and
ffurl_write(). As soon as a single byte is successfully
received/transmitted, the timer restarts.
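For callers, a sketch of setting it (value in microseconds; the option
is passed down to the protocol when the input is opened):

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int open_with_io_timeout(AVFormatContext **ic, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;
        /* give up if no byte gets through for 5 seconds */
        av_dict_set(&opts, "rw_timeout", "5000000", 0);
        ret = avformat_open_input(ic, url, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }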
This has further changes by Michael Niedermayer and Martin Storsjö.
Signed-off-by: Martin Storsjö <martin@martin.st>
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
  * stream parameters
  * codec options
  * codec state
However, it is not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer is allowed
to read. This leads to erratic behaviour depending on whether decoding
or encoding is being performed (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
no clear rules for who can modify what, and the different
processes typically being delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
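With this change, a decoding caller is expected to copy the parameters
into a codec context it owns, roughly as sketched:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    static AVCodecContext *open_decoder_for(AVStream *st)
    {
        AVCodec *codec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext *avctx = avcodec_alloc_context3(codec);
        if (!avctx)
            return NULL;
        /* stream parameters now live in st->codecpar, not st->codec */
        if (avcodec_parameters_to_context(avctx, st->codecpar) < 0 ||
            avcodec_open2(avctx, codec, NULL) < 0)
            avcodec_free_context(&avctx); /* sets avctx to NULL */
        return avctx;
    }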
This feature is mostly used only by NLE software; it is both of
dubious value when enabled by default and a possible security risk.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
All encoders set pts and dts properly now (and have been doing that for
a while), so there is no good reason to do any timestamp guessing in the
muxer.
The newly added AVStreamInternal will later be used for storing all the
private fields currently living in AVStream.
The old one was the result of reverse engineering and guesswork; the
new one has been written following the now-available specification.
This work is part of Outreach Program for Women Summer 2014 activities
for the Libav project.
The fate references had to be changed because the old demuxer truncates
the last frame in some cases, while the new one handles it properly.
The seek-test reference is changed because seeking works differently
in the new demuxer. When seeking, the packet is not read from the stream
directly, but rather constructed by the demuxer. That is why the
position is now -1 in the reference.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The current behavior may produce a different sequence of packets
after seeking, compared to demuxing linearly from the beginning.
This is because the MOV demuxer seeks in each stream individually,
based on timestamp, which may set each stream at a slightly different
position than if the file had been read sequentially.
This makes implementing certain operations, such as segmenting,
quite hard, and slower than need be.
Therefore, add an option which retains the same packet sequence
after seeking, as when a file is demuxed linearly.
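A sketch of enabling it from the API; the option name used here
(seek_streams_individually, disabled to get the linear-equivalent
sequence) is an assumption about how it is exposed:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int open_linear_seek(AVFormatContext **ic, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;
        /* option name assumed; 0 requests the linear-equivalent order */
        av_dict_set(&opts, "seek_streams_individually", "0", 0);
        ret = avformat_open_input(ic, url, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }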
Similarly to what has been done for MOV, display XMP metadata only when
users explicitly request it.
The Extensible Metadata Platform tag can contain various kinds of data
which are not strictly related to the video file, such as the history of
edits and saves from the project file.
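From the API this would look roughly as follows, assuming the option
keeps the MOV demuxer's export_xmp name:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int open_with_xmp(AVFormatContext **ic, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;
        av_dict_set(&opts, "export_xmp", "1", 0); /* opt in to XMP export */
        ret = avformat_open_input(ic, url, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }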
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This delays writing the moov until the first fragment is written,
or until the caller explicitly flushes it. If the first
sample in all streams is available at that point, we can write
a proper editlist, allowing streams to start at
something other than dts=0. For AC3 and DNXHD, a packet is
needed in order to write the moov header properly.
This isn't added to the normal behaviour for empty_moov, since it
would change the behaviour of writing ftyp+moov during
avformat_write_header. Callers that split the output stream into
header+segments (either by flushing manually with the frag_custom flag
set, or by just differentiating between data written during
avformat_write_header and the rest) will need to be adjusted to take
this option into use.
For handling streams that start at something other than dts=0, an
alternative would be to use different kinds of heuristics for
guessing the start dts (using AVCodecContext delay or has_b_frames
together with the frame rate), but this is not reliable, doesn't
necessarily work well with stream copy, and wouldn't produce the
right initialization data for AC3 or DNXHD either.
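A sketch of a caller opting in (flag names as exposed through the mov
muxer's movflags option; frag_custom shown for manual flushing):

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int start_delayed_moov(AVFormatContext *oc)
    {
        AVDictionary *opts = NULL;
        int ret;
        av_dict_set(&opts, "movflags", "+frag_custom+delay_moov", 0);
        ret = avformat_write_header(oc, &opts); /* no ftyp+moov emitted yet */
        av_dict_free(&opts);
        return ret;
    }

    /* ... feed at least one packet per stream, then flush the first
     * fragment, which now also writes ftyp+moov first:
     *     av_write_frame(oc, NULL);
     */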
Signed-off-by: Martin Storsjö <martin@martin.st>
Since this is structurally quite different from normal RTP
(multiple streams are muxed into one single mpegts stream,
which is packetized into one single RTP session), it is kept
as a separate muxer.
Since this also behaves structurally differently from normal
RTP, all of the other muxers that do chained RTP muxing
(rtsp, sap, mp4) would need to be updated similarly to handle
this - in particular, creating one single rtp_mpegts muxer
for the whole presentation instead of one rtp muxer per stream.
Signed-off-by: Martin Storsjö <martin@martin.st>
The packetizer only supports splitting at GOB headers - if
those aren't available frequently enough, it splits at an
arbitrary byte offset (not at a macroblock boundary either, which
would be allowed by the spec) and sends a payload header pretending
that it starts with a GOB header.
As long as a receiver doesn't try to handle such cases cleverly
but just drops broken frames, this shouldn't matter too much
in practice.
Signed-off-by: Martin Storsjö <martin@martin.st>
The Extensible Metadata Platform tag can contain various kinds of data
which are not strictly related to the video file, such as the history of
edits and saves from the project file. So display XMP metadata only when
the user explicitly requests it.
Based on a patch by Marek Fort <marek.fort@chyronhego.com>.