When writing a fragmented file, we write, by default, an index
pointing to all the fragments at the end of the file. This causes
constantly increasing memory usage during muxing. For live streams,
the index might not be useful at all.
A similar fragment index is written (but at the start of the file) if
the global_sidx flag is set. If ism_lookahead is set, we need to keep
data about the last ism_lookahead+1 fragments.
If no fragment index is to be written, we don't need to store
information about all fragments, which avoids memory consumption
growing linearly with the muxing runtime.
This fixes out-of-memory situations with long live mp4 streams.
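For illustration, a typical live, fragmented-MP4 invocation might look
like the following (input and output names are placeholders; whether a
trailing index actually gets written still depends on the muxer flags
in effect):

    ffmpeg -re -i live_source -c copy \
        -movflags +frag_keyframe+empty_moov \
        -f mp4 fragmented_output.mp4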
Signed-off-by: Martin Storsjö <martin@martin.st>
Add -movflags use_metadata_tags to the mov muxer. This will cause
the muxer to write all metadata to the file in the keys and mdta
atoms.
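A hypothetical example (file names and the tag name are placeholders):

    ffmpeg -i input.mov -c copy -movflags use_metadata_tags \
        -metadata com.example.my_custom_tag="some value" output.mov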
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The second one is not explicitly needed, as res is not reset, but it is there
for consistency.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Marton Balint <cus@passwd.hu>
This was missed in e1eb0fc960, when ff_interleaved_peek was
changed to include const during the evolution of the patch.
Signed-off-by: Martin Storsjö <martin@martin.st>
As long as the caller only writes packets using
av_interleaved_write_frame, with no manual flushing, this should allow
us to always have accurate durations at the end of fragments, since
there should be at least one queued packet in each stream (except for
the stream whose current packet is being written; but if the muxer
itself does the cutting of fragments, it also has information about
the next packet for that stream).
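A rough sketch of the caller pattern this relies on (stream mapping,
timestamp rescaling and error handling omitted; ictx/octx are assumed
to be already-configured input and output contexts):

    AVPacket pkt;
    while (av_read_frame(ictx, &pkt) >= 0) {
        /* Hand packets to the interleaving queue; no manual flushing
         * via av_write_frame(octx, NULL) until the very end. */
        av_interleaved_write_frame(octx, &pkt);
    }
    av_write_trailer(octx);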
Signed-off-by: Martin Storsjö <martin@martin.st>
This codepath isn't quite as bad as it used to sound, if fragments
are cut automatically at video packets.
Signed-off-by: Martin Storsjö <martin@martin.st>
https://developer.apple.com/library/mac/technotes/tn2174/_index.html
- Enabled creation of timecode tracks for MP4 in the same way as for
MOV.
- Used nmhd as the media information header of the MP4 timecode track,
instead of the gmhd used in MOV, thus also avoiding tcmi, as
recommended above.
- Bypassed adding the source reference field for MP4, as suggested
above.
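For example, a timecode track can now be requested for MP4 output in
the same way as for MOV (hypothetical file names; -timecode supplies
the timecode metadata):

    ffmpeg -i input.mp4 -c copy -timecode 01:00:00:00 output.mp4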
Issue: https://trac.ffmpeg.org/ticket/4704
Signed-off-by: Syed Andaleeb Roomy <andaleebcse@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer is allowed
to read. This leads to erratic behaviour depending on whether decoding
or encoding is being performed (and whether the AVStream embedded
codec context is used).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers, parsers,
bitstream filters and encoders/decoders) simultaneously, with no clear
rules about who can modify what, and the different processes typically
being delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
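As a minimal sketch of the resulting caller-side pattern (decoder
lookup and error handling largely omitted), a caller now copies the
exported parameters into its own codec context rather than decoding
straight off the AVStream:

    AVStream *st = fmt_ctx->streams[stream_index];
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *dec_ctx = avcodec_alloc_context3(dec);

    /* Copy the demuxer-exported stream parameters into the decoding
     * context before opening the decoder. */
    avcodec_parameters_to_context(dec_ctx, st->codecpar);
    avcodec_open2(dec_ctx, dec, NULL);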
Some (de)muxers open additional files beyond the main IO context.
Currently, they call avio_open() directly, which prevents the caller
from using custom IO for such streams.
This commit adds callbacks to AVFormatContext that default to
avio_open2()/avio_close(), but can be overridden by the caller. All
muxers and demuxers using AVIO are switched to using those callbacks
instead of calling avio_open()/avio_close() directly.
(de)muxers that use the URLProtocol layer directly instead of AVIO
remain unconverted for now. This should be fixed in later commits.
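A minimal sketch of a caller overriding the callbacks (the logging is
purely illustrative):

    static int my_io_open(AVFormatContext *s, AVIOContext **pb,
                          const char *url, int flags,
                          AVDictionary **options)
    {
        av_log(s, AV_LOG_INFO, "opening %s\n", url);
        return avio_open2(pb, url, flags, &s->interrupt_callback, options);
    }

    static void my_io_close(AVFormatContext *s, AVIOContext *pb)
    {
        avio_closep(&pb);
    }

    /* after allocating the format context: */
    fmt_ctx->io_open  = my_io_open;
    fmt_ctx->io_close = my_io_close;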
Support writing encrypted MP4 files using AES-CTR, conforming to
ISO/IEC 23001-7.
Three new parameters were added:
- encryption_scheme - allowed values are none (default) and cenc-aes-ctr
- encryption_key - 128 bit encryption key (hex)
- encryption_kid - 128 bit encryption key identifier (hex)
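A hypothetical invocation (the key and KID below are placeholder hex
values and must be replaced with real ones):

    ffmpeg -i input.mp4 -c copy \
        -encryption_scheme cenc-aes-ctr \
        -encryption_key 00112233445566778899aabbccddeeff \
        -encryption_kid 112233445566778899aabbccddeeff00 \
        encrypted.mp4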
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This feature allows making associations between audio tracks
that Apple players recognize. For example, when an ac3 track has a
tref that points to an aac track, devices that don't support
ac3 will automatically fall back to the aac track.
Apple used to *guess* these associations, but new products
(AppleTV 4) no longer guess, and this association can now only be
made explicitly using the "fall" tref.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
By writing a zero-sized packet, the caller can communicate the
start_dts/start_cts for the stream without actually writing
the first packet.
This allows doing random-access writing of fragments when the
start dts of the stream isn't zero, so that the edit list in the moov
is written based on timestamps from the nominal start time signaled
via the zero-sized packet, while the first proper packet written
corresponds to a later fragment.
To avoid potential unexpected behaviour, empty packets only set
start_dts if the frag_discont flag is set.
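A rough sketch of the intended caller usage (assuming octx is a mov
muxer context opened with the frag_discont movflag, and nominal_dts /
nominal_cts are the desired starting timestamps; whether av_write_frame
or av_interleaved_write_frame is used is up to the caller):

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.stream_index = stream_index;
    pkt.data         = NULL;
    pkt.size         = 0;              /* zero-sized: carries only timestamps */
    pkt.dts          = nominal_dts;
    pkt.pts          = nominal_dts + nominal_cts;
    av_write_frame(octx, &pkt);        /* conveys start_dts/start_cts */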
Signed-off-by: Martin Storsjö <martin@martin.st>
This allows producing fragments discontinuously where the video
stream has b-frames (but starts at pts=0), but doesn't work for cases
where the audio has preroll.
Signed-off-by: Martin Storsjö <martin@martin.st>
In most other cases when writing fragmented mp4 files, the output
IO context is flushed after each fragment. Also flush it after
writing the initial moov, to have it behave in the same way.
Signed-off-by: Martin Storsjö <martin@martin.st>
This also makes sure that a fragmented file without the empty_moov
flag (i.e. with a non-empty initial moov fragment) actually gets
written, if some of the tracks turn out to not have any samples.
Signed-off-by: Martin Storsjö <martin@martin.st>
avpriv_ac3_parse_header was removed in commit 3dfb643.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
The double meaning of the faststart flag (moving the moov atom to the
start of the file to make it streamable, for non-fragmented files, vs
inserting a global sidx index at the start of the file, for fragmented
files) is confusing - see 40ed1cbf1 for an explanation of its origins.
Since the second meaning of the flag hasn't been part of any libav
release yet, just rename it to get rid of the confusion without any
extra deprecation (which wouldn't get rid of the potential confusion
of users adding -movflags faststart even for fragmented files, where
it isn't needed to make them "streamable").
This gets back the old behaviour, where -movflags faststart
doesn't have any effect for fragmented files.
Signed-off-by: Martin Storsjö <martin@martin.st>
For fragmented files with non-empty moov, with a fragment index
(sidx), place the index after the initial moov/mdat pair.
Previously, for this pathological case, the index was written
at the start of the file.
Signed-off-by: Martin Storsjö <martin@martin.st>