Store the file duration in the same timebase in which it arrives (i.e.
milliseconds) and only convert it to the file duration units (100 ns)
when it is actually written, thus simplifying some calculations. Also,
store the duration as unsigned, since it cannot be negative.
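As a minimal sketch of the conversion, assuming the duration is kept
in milliseconds and converted only at write time (the helper name is
hypothetical):

    #include <stdint.h>

    /* 1 ms == 10000 units of 100 ns, so converting at write time is a
     * single multiply on the unsigned value. */
    static uint64_t duration_ms_to_100ns(uint64_t duration_ms)
    {
        return duration_ms * UINT64_C(10000);
    }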
CC: libav-stable@libav.org
Bug-ID: CVE-2016-2326
Adding early support for a subset of the proposed colour elements,
according to the latest version of the spec:
https://mailarchive.ietf.org/arch/search/?email_list=cellar&gbt=1&index=hIKLhMdgTMTEwUTeA4ct38h0tmE
I've left out elements for pix_fmt-related things, as there still
seems to be some discussion around those, and max_cll/max_fall are
currently not propagated, as there is no side data for them yet.
The new elements are exposed under strict experimental mode.
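Since the elements are gated behind strict experimental mode, a caller
must opt in explicitly; a minimal sketch using the public field and
constant (an illustration, not code from the patch):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Opt in to experimental features such as the new colour elements. */
    AVFormatContext *oc = avformat_alloc_context();
    oc->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;

On the command line, the equivalent is passing -strict experimental.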
Signed-off-by: Neil Birkbeck <neil.birkbeck@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This can be used by formats which write all format metadata to the file
as strings, so that non-standard creation times such as 'now' are parsed
as well.
The standardized creation time is UTC ISO 8601 with microsecond precision.
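For illustration only, a sketch of producing that standardized form
(the exact output shape shown here is an assumption, not the library
routine):

    #include <stdio.h>
    #include <time.h>

    /* Render seconds + microseconds as UTC ISO 8601 with microsecond
     * precision, e.g. "2016-01-01T12:34:56.000000Z". */
    static void format_creation_time(char *buf, size_t size,
                                     time_t secs, unsigned micros)
    {
        struct tm tm;
        gmtime_r(&secs, &tm);
        size_t n = strftime(buf, size, "%Y-%m-%dT%H:%M:%S", &tm);
        snprintf(buf + n, size - n, ".%06uZ", micros);
    }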
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
This also fixes reading gapless metadata when the entries do not start with the
mean atom. Such samples can be found here:
https://hydrogenaud.io/index.php/topic,93310.0.html
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Marton Balint <cus@passwd.hu>
https://developer.apple.com/library/mac/technotes/tn2174/_index.html
- Enabled creation of timecode tracks for MP4 in the same way as MOV.
- Used nmhd as the media information header of the MP4 timecode track
instead of the gmhd used in MOV, thus also avoiding tcmi, as recommended
above (see the sketch below).
- Bypassed adding the source reference field for MP4, as suggested above.
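A sketch of the container-dependent choice; the helper names follow
movenc.c conventions but are used here purely for illustration:

    /* MP4 gets a null media header (nmhd), which needs no tcmi child;
     * MOV keeps the base media information header (gmhd) with tcmi. */
    if (track->mode == MODE_MP4)
        mov_write_nmhd_tag(pb, track);
    else
        mov_write_gmhd_tag(pb, track);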
Issue: https://trac.ffmpeg.org/ticket/4704
Signed-off-by: Syed Andaleeb Roomy <andaleebcse@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This broke packed_maindata.mp3.mp4.
It is unknown to me what this commit would have fixed.
Reviewed-by: James Almer <jamrial@gmail.com>
This reverts commit 79127dbbef, reversing
changes made to 9fad1ce7c9.
This allows copying information related to the stream ID from the demuxer
to the muxer, thus making it possible, for example, to retain information
related to synchronous and asynchronous KLV data packets. This information
is used in the muxer when remuxing to distinguish the two kinds of packets
(if the information is lacking, data packets are considered synchronous).
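A minimal sketch of what copying the stream ID looks like when remuxing
with the public API (in_ctx, out_ctx and the loop index are illustrative):

    /* Carry the demuxer's stream ID over to the output stream so the
     * muxer can tell synchronous and asynchronous data packets apart. */
    AVStream *ist = in_ctx->streams[i];
    AVStream *ost = avformat_new_stream(out_ctx, NULL);
    ost->id = ist->id;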
The FATE reference changes are due to the use of
av_packet_merge_side_data(), which increases the size of the output
packets, since side data is merged into the packet data.
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer is allowed
to read. This leads to erratic behaviour depending on whether decoding
or encoding is being performed (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
no clear rules for who can modify what, and with the different
processes typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
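Under the new scheme, a caller that wants to decode builds its own codec
context from the exported parameters; a minimal sketch with the public
API (fmt_ctx and stream_index are assumed to exist):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Copy the demuxer-exported parameters into a caller-owned decoding
     * context instead of using the AVStream embedded one. */
    AVStream *st = fmt_ctx->streams[stream_index];
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *dec_ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(dec_ctx, st->codecpar);
    avcodec_open2(dec_ctx, dec, NULL);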