Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it is not clear which fields belong to which category, so it
is unclear which fields a demuxer is allowed to set or a muxer is
allowed to read. This leads to erratic behaviour depending on whether
decoding or encoding is being performed (and whether it uses the
AVStream embedded codec context).
- various synchronization issues arising from the same context being
used by several different APIs (muxers/demuxers, parsers, bitstream
filters and encoders/decoders) simultaneously, with no clear rules for
who may modify what, and with the different processes typically
delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
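For illustration only, a minimal sketch of how a caller can now set up
its own decoding context from the exported parameters instead of using
the embedded context (error handling omitted; fmt_ctx and stream_index
are assumed to come from the usual avformat_open_input() setup):

    AVStream       *st      = fmt_ctx->streams[stream_index];
    const AVCodec  *dec     = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *dec_ctx = avcodec_alloc_context3(dec);

    /* copy the stream parameters exported by the demuxer into the
     * caller-owned decoding context, then open it */
    avcodec_parameters_to_context(dec_ctx, st->codecpar);
    avcodec_open2(dec_ctx, dec, NULL);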
At least one FATE sample contains such chunks and happens to work simply
by accident (due to find_stream_info() swallowing the error).
CC: libav-stable@libav.org
According to its description, it is supposed to be the LCM of all the
frame durations. The usability of such a thing is vanishingly small,
especially since we cannot determine it with any amount of reliability.
Therefore get rid of it after the next bump.
Replace it with the average framerate where it makes sense.
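Where a caller does need a nominal per-frame duration, it can be
derived from the average framerate; a rough sketch (variable names
made up):

    AVStream  *st = fmt_ctx->streams[video_index];
    AVRational fr = st->avg_frame_rate;

    if (fr.num && fr.den) {
        /* duration of one frame, expressed in the stream time base */
        int64_t frame_duration = av_rescale_q(1, av_inv_q(fr), st->time_base);
        /* ... */
    }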
FATE results for the wtv and xmv demux tests change. In the wtv case
this is caused by the file being corrupted (or possibly badly cut) and
containing invalid timestamps. This results in lavf estimating the
framerate incorrectly and making up wrong frame durations.
In the xmv case the file contains pts jumps, so again the estimated
framerate is far from anything sane and lavf again makes up different
frame durations.
In some other tests, lavf starts making up frame durations from a
different frame.
The time base is 1 / sample_rate, not 90000.
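In demuxer terms that amounts to something like the following (sketch
only, sample_rate assumed to have been read from the stream header):

    /* declare packet timestamps in units of 1/sample_rate instead of
     * the 1/90000 default */
    avpriv_set_pts_info(st, 64, 1, sample_rate);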
Several more codecs encode the sample count in the first 4 bytes of the
chunk, so we set the durations accordingly. Also, we can set start_time and
packet duration instead of keeping track of the sample count in the demuxer.
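Roughly the following pattern, with the chunk layout and sizes only
illustrative of the general idea:

    /* in read_header(): the stream is known to start at zero */
    st->start_time = 0;

    /* in read_packet(): the first 4 bytes of the chunk carry the sample
     * count, which is the packet duration in a 1/sample_rate time base */
    uint32_t nb_samples = avio_rl32(s->pb);
    av_get_packet(s->pb, pkt, chunk_size);
    pkt->duration = nb_samples;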
The container has no timestamps and the framerate isn't stored in the
data either.
The decoder sets the codec time base to the experimentally found value
of 1/15. Do the same in the demuxer; it should at least be better than
the default 1/90000.
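I.e. something along the lines of the following in the demuxer's
read_header (sketch only):

    /* no timestamps in the container; mirror the decoder's
     * experimentally determined 1/15 time base */
    avpriv_set_pts_info(st, 64, 1, 15);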
In the name of consistency:
get_byte -> avio_r8
get_<type> -> avio_r<type>
get_buffer -> avio_read
get_partial_buffer will be made private later
get_strz is left out because I want to change it later to return
something useful.
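At a typical call site the mapping looks like this (field and buffer
names made up for illustration):

    version = avio_r8(pb);              /* was get_byte(pb)              */
    size    = avio_rb32(pb);            /* was get_be32(pb)              */
    ret     = avio_read(pb, buf, size); /* was get_buffer(pb, buf, size) */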
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
Passing an explicit filename to this command is only necessary if the
documentation in the @file block refers to a file different from the
one the block resides in.
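That is, the common case needs no argument (illustrative comment only):

    /**
     * @file
     * short description of what this file contains
     */

while a block documenting some other file would use e.g.
@file path/to/other_file.h.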
Originally committed as revision 22921 to svn://svn.ffmpeg.org/ffmpeg/trunk