Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it is not clear which fields belong to which group. It is
consequently unclear which fields a demuxer is allowed to set or a
muxer is allowed to read. This leads to erratic behaviour depending on
whether decoding or encoding is being performed (and on whether it uses
the AVStream-embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers, parsers,
bitstream filters and encoders/decoders) simultaneously, with no clear
rules about who may modify what, and with the different processes
typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
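As a rough sketch of the intended usage after this change, a caller
sets up its own decoding context from the exported parameters, via the
avcodec_parameters_to_context() helper introduced together with
AVCodecParameters, instead of reusing the embedded context (error
handling simplified for illustration):

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Open a decoder from AVStream.codecpar instead of the (removed)
     * embedded codec context. */
    int open_decoder(AVFormatContext *fmt, int idx, AVCodecContext **out)
    {
        AVStream *st = fmt->streams[idx];
        const AVCodec *codec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext *dec;
        int ret;

        if (!codec)
            return AVERROR_DECODER_NOT_FOUND;
        dec = avcodec_alloc_context3(codec);
        if (!dec)
            return AVERROR(ENOMEM);
        /* copy only the stream parameters; codec options and codec
         * state remain private to this context */
        ret = avcodec_parameters_to_context(dec, st->codecpar);
        if (ret >= 0)
            ret = avcodec_open2(dec, codec, NULL);
        if (ret < 0) {
            avcodec_free_context(&dec);
            return ret;
        }
        *out = dec;
        return 0;
    }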
Some muxers set the FLV field PreviousTagSize to the sum of the tag
lengths. Without this change, the flv demuxer thinks the file is broken
and the resync will fail.
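The kind of tolerance this implies might look roughly like the
following (an illustrative sketch only; last_data_size and the
surrounding control flow are assumptions, and per the spec
PreviousTagSize equals the previous tag's data size plus its 11-byte
header):

    /* Accept the spec value as well as the variant some muxers write,
     * and resync only when neither matches. Illustrative names only. */
    uint32_t prev_size = avio_rb32(s->pb); /* PreviousTagSize field */
    if (prev_size != last_data_size + 11 &&
        prev_size != last_data_size)
        goto resync; /* sizes genuinely disagree */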
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
For http, this avoids spurious warnings about failed requests (e.g.
HTTP error 416 Requested Range Not Satisfiable), if the last packet
is truncated and the size read is bogus.
Signed-off-by: Martin Storsjö <martin@martin.st>
When loading a truncated flv file, it would previously try to seek to
the end of every packet read. For some input protocols (such as http),
such repeated seek attempts cripple reading performance.
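A hedged sketch of the idea (identifiers are illustrative, not the
exact patch): only seek ahead to the next tag while the stream still
has data, so a truncated tail does not trigger a fresh seek, and with
http a fresh request, for every packet:

    /* Skip to the next tag only if we have not already hit EOF. */
    if (!avio_feof(s->pb))
        avio_seek(s->pb, next_tag_pos, SEEK_SET);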
Signed-off-by: Martin Storsjö <martin@martin.st>
The current demuxer behaviour is to create streams in read_header()
based on the audio/video presence flags, but fill in the stream
parameters later, when we actually get some packets for them. This is
rather shady, since other demuxers set the stream parameters
immediately when the stream is created and do not touch the stream
codec context after that.
Change the flv demuxer to behave in the same way as other similar
demuxers -- create the streams only when we get a packet for them.
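A minimal sketch of the lazy-creation pattern (the function name is
illustrative, and post-AVCodecParameters field names are assumed):

    /* Return the stream of the given type, creating it the first time
     * a packet of that type is seen instead of in read_header(). */
    static AVStream *flv_get_stream(AVFormatContext *s, enum AVMediaType type)
    {
        AVStream *st;
        for (unsigned i = 0; i < s->nb_streams; i++)
            if (s->streams[i]->codecpar->codec_type == type)
                return s->streams[i];
        st = avformat_new_stream(s, NULL);
        if (st)
            st->codecpar->codec_type = type;
        return st;
    }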
This error was produced by rtmpproto.c; it is possible that such
streams were dumped. This commit is needed to support them.
Fixes: z0e.flv
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
In case of resync, always free the packet, but retry only if the resync
did not get to the end of the file. Otherwise, there is a memory leak when the
last packet in the file is corrupted.
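The control flow being described might be sketched as follows
(identifiers are illustrative; older trees would use av_free_packet()
instead of av_packet_unref()):

    /* Always release the packet on resync, but retry only while the
     * resync stopped short of EOF; a corrupted final packet is thus
     * freed rather than leaked. */
    if (needs_resync) {
        av_packet_unref(pkt);
        if (!avio_feof(s->pb))
            goto retry;        /* more data available: try again */
        return AVERROR_EOF;    /* truncated/corrupted last packet */
    }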
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Such data streams (which then contain no packets other than the faulty
one) confuse some user applications, like VLC.
Works around VLC ticket 12389
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
From the FLV spec, the FLVTAG defines the TagType as one byte; the spec
description is:
Reserved UB[2]  Reserved for FMS, should be 0
Filter   UB[1]  Indicates if packets are filtered.
                0 = No pre-processing required.
                1 = Pre-processing (such as decryption) of the packet
                    is required before it can be rendered.
                Shall be 0 in unencrypted files, and 1 for encrypted
                tags.
                See Annex F. FLV Encryption for the use of filters.
TagType  UB[5]  Type of contents in this tag. The following types are
                defined:
                 8 = audio
                 9 = video
                18 = script data
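Parsing that first tag byte according to the layout above could be
sketched as (illustrative only, not the exact patch):

    /* Split the first FLVTAG byte into its bitfields. */
    uint8_t b    = avio_r8(s->pb);
    int reserved = (b >> 6) & 0x03; /* UB[2], should be 0 */
    int filter   = (b >> 5) & 0x01; /* UB[1], 1 = encrypted/filtered */
    int tag_type =  b       & 0x1f; /* UB[5], 8, 9 or 18 */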
Signed-off-by: Steven Liu <qi.liu@chinacache.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Currently, only onMetaData is used, but some providers (wrongly)
put metadata into onCuePoint events, and it's still nice to be
able to use that data.
onCuePoint events also present metadata slightly differently than
onMetaData events: all metadata is found inside an object called
"parameters". In order to extract this metadata, it's easiest to
recurse through the object tree and pull out anything found in
child objects and put it in the top-level metadata.
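A hedged sketch of that recursion (the AmfNode type and its fields are
hypothetical, for illustration; AMF_DATA_TYPE_* and av_dict_set() are
the real lavf/lavu names):

    /* Walk an AMF object tree and promote string values found in child
     * objects (e.g. inside "parameters") into the top-level metadata. */
    static void flatten_amf(AVDictionary **meta, const AmfNode *node)
    {
        for (int i = 0; i < node->nb_children; i++) {
            const AmfNode *c = node->children[i];
            if (c->type == AMF_DATA_TYPE_OBJECT)
                flatten_amf(meta, c);              /* recurse */
            else if (c->type == AMF_DATA_TYPE_STRING)
                av_dict_set(meta, c->name, c->str, 0);
        }
    }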
Reference: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/2/help.html?content=00001404.html
Signed-off-by: Anton Khirnov <anton@khirnov.net>