It is supposed to be a flag. The only currently defined value is
AVIO_SEEKABLE_NORMAL, but other ones may be added in the future.
However, all the current lavf code treats this field as a bool (mainly
for historical reasons).
Change all those cases to properly check for AVIO_SEEKABLE_NORMAL.
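A minimal sketch of the intended check (the helper name is made up; AVIOContext
and AVIO_SEEKABLE_NORMAL are the real lavf names):

    #include <libavformat/avio.h>

    /* Minimal sketch: test the specific AVIO_SEEKABLE_NORMAL capability bit
     * instead of treating the seekable field as a bool. The helper name is
     * made up for illustration. */
    static int seek_supported(const AVIOContext *pb)
    {
        /* old, bool-style check:  if (pb->seekable)                        */
        /* new, flag-style check:  if (pb->seekable & AVIO_SEEKABLE_NORMAL) */
        return pb->seekable & AVIO_SEEKABLE_NORMAL;
    }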
When writing a fragmented file, we by default write an index pointing
to all the fragments at the end of the file. This causes constantly
increasing memory usage during the muxing. For live streams, the
index might not be useful at all.
A similar fragment index is written (but at the start of the file) if
the global_sidx flag is set. If ism_lookahead is set, we need to keep
data about the last ism_lookahead+1 fragments.
If no fragment index is to be written, we don't need to store information
about all fragments, which avoids growing the memory consumption
linearly with the muxing runtime.
This fixes out of memory situations with long live mp4 streams.
Signed-off-by: Martin Storsjö <martin@martin.st>
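A rough sketch (not the actual movenc code; names are made up) of when
per-fragment bookkeeping is still needed after this change:

    /* Rough sketch, not the actual movenc logic: per-fragment info is only
     * kept if an index will be written at the end of the file (the default),
     * a global sidx is requested, or smooth-streaming lookahead needs data
     * about the most recent fragments. */
    static int need_fragment_info(int write_end_index, int global_sidx,
                                  int ism_lookahead)
    {
        return write_end_index || global_sidx || ism_lookahead > 0;
    }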
This was missed in e1eb0fc960, when ff_interleaved_peek was
changed to include const during the evolution of the patch.
Signed-off-by: Martin Storsjö <martin@martin.st>
As long as the caller only writes packets using av_interleaved_write_frame
with no manual flushing, this should allow us to always have accurate
durations at the end of fragments, since there should be at least
one queued packet in each stream (except for the stream where the
current packet is being written, but if the muxer itself does the
cutting of fragments, it also has info about the next packet for that
stream).
Signed-off-by: Martin Storsjö <martin@martin.st>
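A minimal sketch of the caller pattern assumed above (get_next_packet is a
made-up placeholder for the caller's packet source):

    #include <libavformat/avformat.h>

    /* Sketch of the assumed caller pattern: every packet goes through
     * av_interleaved_write_frame() with no manual flushing, so the muxer's
     * interleaving queue normally holds the next packet of each stream and
     * exact durations can be derived when a fragment is cut. */
    static int mux_all(AVFormatContext *oc, int (*get_next_packet)(AVPacket *pkt))
    {
        AVPacket pkt;

        while (get_next_packet(&pkt) >= 0)         /* placeholder packet source */
            av_interleaved_write_frame(oc, &pkt);  /* packets are queued/interleaved */
        av_interleaved_write_frame(oc, NULL);      /* drain the queue at the end */
        return av_write_trailer(oc);
    }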
This codepath isn't quite as bad as it used to sound, if fragments
are cut automatically at video packets.
Signed-off-by: Martin Storsjö <martin@martin.st>
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by the callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer is allowed to
read. This leads to erratic behaviour depending on whether decoding or
encoding is being performed (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
no clear rules for who can modify what, and with the different
processes typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
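A minimal sketch of the resulting caller-side pattern (the helper name is made
up; the avcodec calls are the real API):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Sketch: stream parameters are read from st->codecpar and copied into a
     * separate decoding context, instead of decoding directly with the
     * AVStream embedded codec context. */
    static AVCodecContext *open_decoder_for(AVStream *st)
    {
        const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext *ctx = dec ? avcodec_alloc_context3(dec) : NULL;

        if (!ctx)
            return NULL;
        if (avcodec_parameters_to_context(ctx, st->codecpar) < 0 ||
            avcodec_open2(ctx, dec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return NULL;
        }
        return ctx;
    }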
Some (de)muxers open additional files beyond the main IO context.
Currently, they call avio_open() directly, which prevents the caller
from using custom IO for such streams.
This commit adds callbacks to AVFormatContext that default to
avio_open2()/avio_close(), but can be overridden by the caller. All
muxers and demuxers using AVIO are switched to using those callbacks
instead of calling avio_open()/avio_close() directly.
(de)muxers that use the URLProtocol layer directly instead of AVIO
remain unconverted for now. This should be fixed in later commits.
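A minimal sketch of a caller overriding the new callback (my_io_open is a
made-up name; the signature matches the AVFormatContext field added here):

    #include <libavformat/avformat.h>

    /* Sketch: a custom io_open callback that logs every extra file a
     * (de)muxer opens, then falls back to the default avio_open2() behaviour. */
    static int my_io_open(AVFormatContext *s, AVIOContext **pb, const char *url,
                          int flags, AVDictionary **options)
    {
        av_log(s, AV_LOG_INFO, "opening %s\n", url);
        return avio_open2(pb, url, flags, &s->interrupt_callback, options);
    }

    /* installed before avformat_open_input()/avformat_write_header():
     *     ctx->io_open = my_io_open;                                   */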
This feature allows making associations between audio tracks
that Apple players recognize. E.g. when an ac3 track has a
tref that points to an aac track, devices that don't support
ac3 will automatically fall back to the aac track.
Apple used to *guess* these associations, but new products
(AppleTV 4) no longer guess, and this association can now only
be made explicitly using the "fall" tref.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
By writing a zero-sized packet, the caller can communicate the
start_dts/start_cts for the stream without actually writing
the first packet.
This allows doing random-access writing of fragments when the
start dts of the stream isn't zero, so that the edit list in the moov
is written based on timestamps from the nominal start time signaled
via the zero-sized packet, while the first proper packet written
corresponds to a later fragment.
To avoid potential unexpected behaviour, empty packets only set
start_dts if the frag_discont flag is set.
Signed-off-by: Martin Storsjö <martin@martin.st>
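A minimal sketch of how a caller might signal the start time (helper name is
made up; assumes the frag_discont movflag is enabled on the mov/mp4 muxer):

    #include <libavformat/avformat.h>

    /* Sketch, assuming -movflags frag_discont is set: a zero-sized packet
     * carries only timestamps, so the muxer learns the nominal start dts/pts
     * of the stream before any real packet is written. */
    static int signal_start_time(AVFormatContext *oc, int stream_index,
                                 int64_t dts, int64_t pts)
    {
        AVPacket pkt;

        av_init_packet(&pkt);       /* packet with no data/side data */
        pkt.data         = NULL;
        pkt.size         = 0;       /* zero-sized: nothing is written to the mdat */
        pkt.stream_index = stream_index;
        pkt.dts          = dts;
        pkt.pts          = pts;
        return av_write_frame(oc, &pkt);
    }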
This allows producing fragments discontinuously where the video
stream has b-frames (but starts at pts=0), but doesn't work for
cases where the audio has preroll.
Signed-off-by: Martin Storsjö <martin@martin.st>
In most other cases when writing fragmented mp4 files, the output
IO context is flushed after each fragment. Also flush it after
writing the initial moov, to have it behave in the same way.
Signed-off-by: Martin Storsjö <martin@martin.st>
This also makes sure that a fragmented file without the empty_moov
flag (i.e. with a non-empty initial moov fragment) actually gets
written, if some of the tracks turn out to not have any samples.
Signed-off-by: Martin Storsjö <martin@martin.st>
The double meaning of the faststart flag (moving a moov atom
to the start of files, making them streamable, for non-fragmented
files, vs inserting a global sidx index at the start of files
for fragmented files) is confusing - see 40ed1cbf1 for
an explanation of its origins.
Since the second meaning of the flag hasn't been part of any
libav release yet, just rename it to get rid of the confusion
without any extra deprecation (which wouldn't get rid of the
potential confusion of users adding -movflags faststart
even for fragmented files, where it isn't needed for making
them "streamable").
This gets back the old behaviour, where -movflags faststart
doesn't have any effect for fragmented files.
Signed-off-by: Martin Storsjö <martin@martin.st>
For fragmented files with non-empty moov, with a fragment index
(sidx), place the index after the initial moov/mdat pair.
Previously, for this pathological case, the index was written
at the start of the file.
Signed-off-by: Martin Storsjö <martin@martin.st>
The same field is also used for writing the sidx index header,
for fragmented files, when the faststart flag is used.
Signed-off-by: Martin Storsjö <martin@martin.st>
This fixes crashes with pathological cases when trying to write
a sidx index (via the -movflags faststart option, in combination
with fragmenting options), when no fragments actually have been
written. (This is possible if the empty_moov flag isn't used,
so that all actual packet data is written in the moov/mdat pair,
and no moof/mdat pairs have been written.)
In these pathological cases, no sidx should be written at all.
Signed-off-by: Martin Storsjö <martin@martin.st>
display_matrix_size is only initialized when av_stream_get_side_data()
returns a side data pointer. The code is safe since the only effect this
has is setting the display_matrix pointer to NULL, which it already was
anyway.
These are essential for allowing QuickTime to keep detecting content
as slow-motion - this change allows preserving them on stream copy.
Signed-off-by: Martin Storsjö <martin@martin.st>
For strict CFR, they should be pretty much equal, but if the stream
is VFR, there can sometimes be a significant difference.
Calculate the pts duration separately, used in sidx atoms and for
tfrf/tfxd boxes in smooth streaming ismv files.
Also make sure to reduce the duration of sidx entries according to
edit lists.
Signed-off-by: Martin Storsjö <martin@martin.st>
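A rough illustration of the two duration notions (made-up helpers over
per-sample timestamp and duration arrays):

    #include <stdint.h>

    /* Rough illustration: for a VFR fragment the span derived from dts and
     * the span derived from pts can differ; sidx and tfrf/tfxd want the
     * pts-based duration. */
    static int64_t dts_duration(const int64_t *dts, const int64_t *dur, int n)
    {
        return dts[n - 1] + dur[n - 1] - dts[0];
    }

    static int64_t pts_duration(const int64_t *pts, const int64_t *dur, int n)
    {
        return pts[n - 1] + dur[n - 1] - pts[0];
    }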
Adjusting it is only necessary when a sidx/tfrf/tfxd atom has already
been written for the previous fragment (since the sidx/tfrf/tfxd atoms
include the duration from the first pts of the previous fragment to
the first pts of the new fragment).
Signed-off-by: Martin Storsjö <martin@martin.st>
When automatically flushing fragments based on set conditions
(fragmentation on keyframes, after some interval or byte size),
we already have the next packet for one stream - use this for setting
the duration of the last packet in the flushed fragment correctly.
This avoids having to adjust the timestamp of the first packet in
the new fragment since the last duration was unknown.
Unfortunately, this only works for automatic flushing (not for
caller-triggered flushing, like in the dash muxer), and only for the
one single track that triggered the flushing. The duration of the
last sample in all other tracks is still dependent on the AVPacket
duration (or heuristics).
Signed-off-by: Martin Storsjö <martin@martin.st>
Even if this is a guess, it is way better than writing a zero duration
of the last sample in a fragment (because if the duration is zero,
the first sample of the next fragment will have the same timestamp
as the last sample in the previous one).
Since we normally don't require libavformat muxer users to set
the duration field in AVPacket, we probably can't strictly require
it here either, so don't log this as a strict warning, only as info.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is incompatible with the omit_tfhd_offset flag (writing
position independent fragments with interleaving requires the
default_base_moof flag).
This makes the moof atoms slightly bigger, but can be better for
playback (improving locality of sample data in the mdat).
Signed-off-by: Martin Storsjö <martin@martin.st>
This is needed if all the data for one track isn't continuous
within the mdat. Normally we make sure all the data for one
track is continuous, but in new cases we will need to have
the samples interleaved.
Signed-off-by: Martin Storsjö <martin@martin.st>
This way, the caller doesn't need to coordinate setting the option
after the moov atom has been written. The downside is that it is
no longer possible to use the option for checking whether the moov
atom already has been written, but a caller is able to keep track
of that by other means anyway.
Signed-off-by: Martin Storsjö <martin@martin.st>
The previous use of the mov->fragments field, for determining whether
written packets were part of the first fragment or not, didn't
work as intended when using the empty_moov flag.
Signed-off-by: Martin Storsjö <martin@martin.st>
This avoids assuming that e.g. audio samples are marked as
sync samples.
This allows omitting the sample flags from trun, if the default
flags happen to be right for all the samples.
Signed-off-by: Martin Storsjö <martin@martin.st>
a876585215 had the unintended side effect of returning AVERROR(ENOMEM)
when track->entry is zero, while the code intentionally wants to
continue in that case.
Signed-off-by: Martin Storsjö <martin@martin.st>
Use the more generic approach with the delay_moov flag, instead of
having an update mechanism specific to this one single atom.
Signed-off-by: Martin Storsjö <martin@martin.st>
This delays writing the moov until the first fragment is written,
or until it is flushed explicitly by the caller when wanted. If the first
sample in all streams is available at that point, we can write
a proper editlist, allowing streams to start at
something other than dts=0. For AC3 and DNXHD, a packet is
needed in order to write the moov header properly.
This isn't added to the normal behaviour for empty_moov, since it
would change the behaviour of writing ftyp+moov during
avformat_write_header. Callers that split the output stream into
header+segments (either by flushing manually, with the frag_custom flag
set, or by just differentiating between data written during
avformat_write_header and the rest) will need to be adjusted to make
use of this option.
For handling streams that start at something other than dts=0, an
alternative would be to use different kinds of heuristics for
guessing the start dts (using AVCodecContext delay or has_b_frames
together with the frame rate), but this is not reliable and doesn't
necessarily work well with stream copy, and wouldn't work for getting
the right initialization data for AC3 or DNXHD either.
Signed-off-by: Martin Storsjö <martin@martin.st>
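A minimal sketch of enabling this from the caller side (the option values are
existing movenc movflags; the actual fragment flushing is left out):

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    /* Sketch: with delay_moov set, avformat_write_header() no longer emits
     * ftyp+moov; the moov is written when the first fragment is flushed
     * (here left to the caller via the frag_custom flag). */
    static int write_delayed_header(AVFormatContext *oc)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "movflags", "+empty_moov+delay_moov+frag_custom", 0);
        ret = avformat_write_header(oc, &opts);
        av_dict_free(&opts);
        return ret;
    }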