It is supposed to be a flag. The only currently defined value is
AVIO_SEEKABLE_NORMAL, but others may be added in the future.
However, all the current lavf code treats this field as a bool (mainly
for historical reasons).
Change all those cases to properly check for AVIO_SEEKABLE_NORMAL.
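A minimal sketch of the intended check, assuming an AVIOContext *pb in
scope, as opposed to the old boolean-style if (pb->seekable):

    if (pb->seekable & AVIO_SEEKABLE_NORMAL) {
        /* normal seeking (e.g. avio_seek()) is available */
    } else {
        /* unseekable, or only some future, different seek mode */
    }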
Streaming servers appear to ignore all other language metadata.
Signed-off-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Josh de Kock <josh@itanimul.li>
This way, if the bit rate is not set, max_bitrate will be used
instead. This enables, for example, re-using max_bitrate
information from the input or doing transcoding with a rate
control mode that is not bit rate based.
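A hedged sketch of the fallback logic; the names below are hypothetical
and not the actual patch:

    #include <stdint.h>

    static int64_t effective_bitrate(int64_t bit_rate, int64_t max_bitrate)
    {
        /* Prefer the configured bit rate; fall back to max_bitrate
         * when the former is unset (0). */
        return bit_rate ? bit_rate : max_bitrate;
    }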
Signed-off-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Sometimes it's useful to be able to define the exact track numbers in
the generated file, instead of always beginning at track id 1. Using
the option use_stream_ids_as_track_ids now copies the stream ids to the
track ids. Dynamically generated tracks (e.g. tmcd) have their track
numbers defined as continuing from the highest numbered stream id.
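A hedged usage sketch, assuming the option is exposed as a mov muxer
flag and oc is a fully configured output AVFormatContext:

    #include <libavformat/avformat.h>

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "movflags", "use_stream_ids_as_track_ids", 0);
    int ret = avformat_write_header(oc, &opts); /* flag reaches the muxer here */
    av_dict_free(&opts);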
Signed-off-by: Erkki Seppälä <erkki.seppala.ext@nokia.com>
Signed-off-by: OZOPlayer <OZOPL@nokia.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
It's a small and simple function that can be inlined.
This removes one private symbol and should reduce object dependencies
with the next major bump.
Signed-off-by: James Almer <jamrial@gmail.com>
When writing a fragmented file, we by default write an index pointing
to all the fragments at the end of the file. This causes constantly
increasing memory usage during the muxing. For live streams, the
index might not be useful at all.
A similar fragment index is written (but at the start of the file) if
the global_sidx flag is set. If ism_lookahead is set, we need to keep
data about the last ism_lookahead+1 fragments.
If no fragment index is to be written, we don't need to store
information about all fragments, so the memory consumption no longer
grows linearly with the muxing runtime.
This fixes out-of-memory situations with long live mp4 streams.
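For context, a hedged sketch of a typical live fragmented setup that
never reads a trailing index (the movflags shown are pre-existing mov
muxer flags, not the change described here; oc is the output context):

    AVDictionary *opts = NULL;
    /* Write the moov up front and cut fragments at keyframes, as is
     * common for live streams. */
    av_dict_set(&opts, "movflags",
                "empty_moov+default_base_moof+frag_keyframe", 0);
    int ret = avformat_write_header(oc, &opts);
    av_dict_free(&opts);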
Signed-off-by: Martin Storsjö <martin@martin.st>
Add -movflags use_metadata_tags to the mov muxer. This will cause
the muxer to write all metadata to the file in the keys and mdta
atoms.
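A minimal usage sketch, assuming oc is a configured mov/mp4 output
context; the metadata key is only an example:

    av_dict_set(&oc->metadata, "com.example.note", "some value", 0); /* example key */
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "movflags", "use_metadata_tags", 0);
    int ret = avformat_write_header(oc, &opts); /* metadata goes to keys/mdta */
    av_dict_free(&opts);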
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The second one is not explicitly needed, as res is not reset, but it is there
for consistency.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Marton Balint <cus@passwd.hu>
This was missed in e1eb0fc960, when ff_interleaved_peek was
changed to include const during the evolution of the patch.
Signed-off-by: Martin Storsjö <martin@martin.st>
As long as the caller only writes packets using av_interleaved_write_frame
with no manual flushing, this should allow us to always have accurate
durations at the end of fragments, since there should be at least
one queued packet in each stream (except for the stream where the
current packet is being written, but if the muxer itself does the
cutting of fragments, it also has info about the next packet for that
stream).
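A hedged sketch of the caller pattern this relies on (in/out are opened
input/output contexts with a 1:1 stream mapping; setup and error
handling omitted):

    AVPacket pkt;
    while (av_read_frame(in, &pkt) >= 0) {
        AVStream *ist = in->streams[pkt.stream_index];
        AVStream *ost = out->streams[pkt.stream_index];
        av_packet_rescale_ts(&pkt, ist->time_base, ost->time_base);
        /* No manual flushing (av_write_frame(out, NULL)): the interleaving
         * queue then keeps at least one packet per stream, which is what
         * gives the muxer accurate fragment durations. */
        if (av_interleaved_write_frame(out, &pkt) < 0)
            break;
    }
    av_write_trailer(out);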
Signed-off-by: Martin Storsjö <martin@martin.st>
This codepath isn't quite as bad as it used to sound, if fragments
are cut automatically at video packets.
Signed-off-by: Martin Storsjö <martin@martin.st>
https://developer.apple.com/library/mac/technotes/tn2174/_index.html
- Enabled creation of timecode tracks for MP4 in the same way as MOV.
- Used nmhd instead of gmhd (as used in MOV) as the media information
header of the timecode track for MP4, thus also avoiding tcmi, as
recommended above.
- Skipped adding the source reference field for MP4, as suggested above.
Issue: https://trac.ffmpeg.org/ticket/4704
Signed-off-by: Syed Andaleeb Roomy <andaleebcse@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by the callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer is allowed
to read. This leads to erratic behaviour depending on whether decoding
or encoding is being performed (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
there being no clear rules for who can modify what and the different
processes being typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
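The resulting caller-side pattern on the demuxing side, sketched with
the avcodec_parameters_to_context() helper (fmt_ctx is an opened input
context, i a valid stream index; error handling omitted):

    AVStream *st = fmt_ctx->streams[i];
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *avctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(avctx, st->codecpar); /* was: st->codec */
    avcodec_open2(avctx, dec, NULL);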
Some (de)muxers open additional files beyond the main IO context.
Currently, they call avio_open() directly, which prevents the caller
from using custom IO for such streams.
This commit adds callbacks to AVFormatContext that default to
avio_open2()/avio_close(), but can be overridden by the caller. All
muxers and demuxers using AVIO are switched to using those callbacks
instead of calling avio_open()/avio_close() directly.
The (de)muxers that use the URLProtocol layer directly instead of AVIO
remain unconverted for now. This should be fixed in later commits.
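A hedged sketch of a caller overriding the new callbacks, assuming they
are exposed as AVFormatContext.io_open/io_close with signatures matching
the defaults:

    static int my_io_open(AVFormatContext *s, AVIOContext **pb,
                          const char *url, int flags, AVDictionary **options)
    {
        av_log(s, AV_LOG_INFO, "opening additional file: %s\n", url);
        /* Delegate to the default implementation described above. */
        return avio_open2(pb, url, flags, &s->interrupt_callback, options);
    }

    /* set before avformat_open_input() / avformat_write_header() */
    ctx->io_open = my_io_open;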