Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by the callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it is not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer allowed to
read. This leads to erratic behaviour depending on whether decoding or
encoding is being performed (and whether the caller uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
no clear rules about who may modify what and with the different
processes typically being delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
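For illustration, a minimal sketch of the usage pattern this change expects
from callers: build a caller-owned decoding context from the exported
parameters instead of reusing the context embedded in AVStream. Stream
selection, NULL checks and most error handling are simplified, and the
function name is a placeholder, not part of the API:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Sketch: open a decoder from AVStream.codecpar rather than AVStream.codec.
     * NULL checks and most error handling are omitted for brevity. */
    static int open_decoder(const char *url, AVFormatContext **pfmt,
                            AVCodecContext **pdec)
    {
        AVFormatContext *fmt = NULL;

        if (avformat_open_input(&fmt, url, NULL, NULL) < 0)
            return AVERROR_UNKNOWN;
        if (avformat_find_stream_info(fmt, NULL) < 0) {
            avformat_close_input(&fmt);
            return AVERROR_UNKNOWN;
        }

        AVStream *st = fmt->streams[0];           /* assumes stream 0 is wanted */
        const AVCodec *codec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext *dec  = avcodec_alloc_context3(codec);

        /* copy the demuxer-exported stream parameters into the caller's context */
        avcodec_parameters_to_context(dec, st->codecpar);
        avcodec_open2(dec, codec, NULL);

        *pfmt = fmt;
        *pdec = dec;
        return 0;
    }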
Since 596e5d4783, this is no longer necessary. It also makes it
possible to actually disable flushing, improving write performance (but
possibly at the cost of worse latency in real-time streaming).
Signed-off-by: Martin Storsjö <martin@martin.st>
Otherwise ffmpeg -formats claims that we only support demuxing
of that format.
To keep compatibility, the struct could be duplicated instead,
but that seems like overkill for such a rare format.
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
According to its description, it is supposed to be the LCM of all the
frame durations. The usability of such a thing is vanishingly small,
especially since we cannot determine it with any amount of reliability.
Therefore get rid of it after the next bump.
Replace it with the average framerate where it makes sense.
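As a sketch of what callers can do once the field is gone, a nominal frame
duration can be derived from AVStream.avg_frame_rate where the demuxer was
able to compute it (a zero numerator or denominator means it is unknown);
the helper name is illustrative only:

    #include <libavformat/avformat.h>

    /* Sketch: nominal frame duration in seconds from the average framerate;
     * returns 0.0 when avg_frame_rate is not known. */
    static double nominal_frame_duration(const AVStream *st)
    {
        if (st->avg_frame_rate.num > 0 && st->avg_frame_rate.den > 0)
            return (double)st->avg_frame_rate.den / st->avg_frame_rate.num;
        return 0.0;
    }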
FATE results for the wtv and xmv demux tests change. In the wtv case
this is caused by the file being corrupted (or possibly badly cut) and
containing invalid timestamps. This results in lavf estimating the
framerate wrong and making up wrong frame durations.
In the xmv case the file contains pts jumps, so again the estimated
framerate is far from anything sane and lavf again makes up different
frame durations.
In some other tests, lavf starts making up frame durations from a
different frame.
In the name of consistency:
put_byte -> avio_w8
put_<type> -> avio_w<type>
put_buffer -> avio_write
put_nbyte will be made private
put_tag will be merged with avio_put_str
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
(cherry picked from commit 77eb5504d3)
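For reference, a sketch of how muxer code reads with the renamed helpers;
the chunk layout written here is made up purely for illustration and does
not correspond to any real format:

    #include <libavformat/avio.h>

    /* Illustrative only: write a fictitious chunk header using the new names. */
    static void write_example_chunk(AVIOContext *pb, const uint8_t *data, int size)
    {
        avio_w8(pb, 0x47);            /* was put_byte()   */
        avio_wl16(pb, 0x0100);        /* was put_le16()   */
        avio_wl32(pb, size);          /* was put_le32()   */
        avio_write(pb, data, size);   /* was put_buffer() */
    }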
The VC1 test container always uses a time-base of 1 ms, so we must
convert from whatever time-base the application gave us to that;
otherwise the video will play at ridiculous speeds.
It would be possible to signal that a container supports only one
time-base and have code in a layer above do the conversion, but
for a single format this seems over-engineered.
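A minimal sketch of the conversion described above, rescaling a timestamp
from the stream's time-base to the container's fixed 1 ms time-base (names
are illustrative and not taken from the actual muxer):

    #include <libavutil/mathematics.h>

    /* Rescale a timestamp from the stream's time-base into 1 ms units. */
    static int64_t to_milliseconds(int64_t ts, AVRational stream_tb)
    {
        const AVRational ms_tb = { 1, 1000 };
        return av_rescale_q(ts, stream_tb, ms_tb);
    }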
This also lists the objects from those two libraries as internal (by adding
the ff_ prefix) so that they can then be hidden via linker scripts.
(cherry picked from commit c6610a216e)