Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer is allowed
to read. This leads to erratic behaviour depending on whether decoding
or encoding is being performed (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used simultaneously by several different APIs
(muxers/demuxers, parsers, bitstream filters and encoders/decoders),
with no clear rules for who can modify what and with the different
processes typically being delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
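As a rough illustration of the caller-side pattern this enables (a
sketch only, not part of this commit; open_decoder_for_stream() is a
made-up helper name), decoding no longer goes through the
stream-embedded context but through a context the caller owns:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* The demuxer fills st->codecpar; the caller copies those
     * parameters into its own codec context and opens that. */
    static int open_decoder_for_stream(AVFormatContext *fmt_ctx,
                                       int stream_index,
                                       AVCodecContext **out_ctx)
    {
        AVStream *st = fmt_ctx->streams[stream_index];
        const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext *dec_ctx;
        int ret;

        if (!dec)
            return AVERROR_DECODER_NOT_FOUND;
        dec_ctx = avcodec_alloc_context3(dec);
        if (!dec_ctx)
            return AVERROR(ENOMEM);
        if ((ret = avcodec_parameters_to_context(dec_ctx, st->codecpar)) < 0 ||
            (ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
            avcodec_free_context(&dec_ctx);
            return ret;
        }
        *out_ctx = dec_ctx;
        return 0;
    }

When muxing, the direction is reversed: the caller fills st->codecpar,
e.g. with avcodec_parameters_from_context(st->codecpar, enc_ctx).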
Not sure if this actually happens, but we do the same check when
checking payload_type earlier in the function, so it might be
needed here as well.
Signed-off-by: Martin Storsjö <martin@martin.st>
This makes all users of rtpenc_chain (the RTSP muxer, sapenc, mov
RTP hinting) work again; they had been broken since 8034130e0.
Signed-off-by: Martin Storsjö <martin@martin.st>
If a stream uses a different sample rate or number of channels than
the static payload type specifies, use a dynamic payload type instead,
where the parameters are passed in the SDP.
G722 is a special case where the normal rules don't apply.
Signed-off-by: Martin Storsjö <martin@martin.st>
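To illustrate the rule (a hedged sketch, not the actual
ff_rtp_get_payload_type() code; pick_payload_type() is a made-up
helper and only two of the RFC 3551 static types are shown):

    #include <libavcodec/avcodec.h>

    /* Only reuse an RFC 3551 static payload type if the stream matches
     * its registered sample rate and channel count; otherwise fall
     * back to a dynamic payload type (96 or higher) whose parameters
     * are described in the SDP.  G722 is the odd one out: it is
     * registered with an 8000 Hz RTP clock rate even though the audio
     * is actually sampled at 16000 Hz. */
    static int pick_payload_type(enum AVCodecID id, int sample_rate, int channels)
    {
        switch (id) {
        case AV_CODEC_ID_PCM_MULAW:  /* static PT 0: 8000 Hz, mono */
            return sample_rate == 8000 && channels == 1 ? 0 : 96;
        case AV_CODEC_ID_ADPCM_G722: /* static PT 9: 16 kHz audio, 8 kHz RTP clock */
            return sample_rate == 16000 && channels == 1 ? 9 : 96;
        /* ... remaining static types ... */
        default:
            return 96; /* dynamic */
        }
    }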
According to newer RFCs, this packetization scheme should only
be used for interfacing with legacy systems.
Implementing this packetization mode properly requires parsing
the full H263 bitstream to find macroblock boundaries (and knowing
their macroblock and GOB numbers and motion vector predictors).
This implementation tries to look for GOB headers (which
can be inserted by using -ps <small number>), but if the GOBs
aren't small enough to fit into the MTU, the packetizer blindly
splits the packet at an arbitrary offset and claims it is a GOB
boundary (by using Mode A from the RFC). While not correct, this
seems to work with some receivers.
Signed-off-by: Martin Storsjö <martin@martin.st>
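For illustration, the boundary search amounts to something like the
following (a simplified sketch, not the actual rtpenc_h263.c code;
find_gob_boundary() is a made-up name, and start/end bracket the data
that still fits within the payload budget):

    #include <stdint.h>

    /* Scan backwards for a byte-aligned GOB (or picture) start code,
     * which begins with at least 16 zero bits, i.e. two zero bytes.
     * If nothing is found, the caller splits at the budget limit
     * anyway and sends the fragment as Mode A. */
    static const uint8_t *find_gob_boundary(const uint8_t *start,
                                            const uint8_t *end)
    {
        const uint8_t *p;
        for (p = end - 2; p > start; p--)
            if (p[0] == 0 && p[1] == 0)
                return p;   /* candidate split point at a start code */
        return end;         /* no GOB header within the budget */
    }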
It was broken in 3b3ea34655
"Remove all uses of deprecated AVOptions API", where any
presence of a payload_type AVOption caused its value to
be returned, even if it wasn't set (and thus had the default
-1 value).
This caused the RTP muxer to be broken.
Signed-off-by: Martin Storsjö <martin@martin.st>
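The corrected check boils down to something like the following (a
sketch, not the exact patch; fmt is assumed to be the RTP muxer's
AVFormatContext and libavutil/opt.h provides av_opt_get_int()):

    /* Only honour the payload_type option if it was actually set to a
     * non-negative value; the default of -1 means "not specified" and
     * must not be returned as if it were a valid payload type. */
    if (fmt && fmt->oformat && fmt->oformat->priv_class && fmt->priv_data) {
        int64_t payload_type;
        if (av_opt_get_int(fmt->priv_data, "payload_type", 0, &payload_type) >= 0 &&
            payload_type >= 0)
            return (int)payload_type;
    }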
Specifying the payload type is useful when the type number has
already been negotiated before creating the stream, for example
in the SIP protocol.
Signed-off-by: Martin Storsjö <martin@martin.st>
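A hypothetical usage sketch (rtp_ctx, ret and the value 101 are
placeholders for the muxer context and a payload type negotiated
out-of-band, e.g. via SIP/SDP):

    /* Pass the externally negotiated payload type to the RTP muxer
     * through its payload_type AVOption before writing the header. */
    AVDictionary *opts = NULL;
    av_dict_set_int(&opts, "payload_type", 101, 0);
    ret = avformat_write_header(rtp_ctx, &opts);
    av_dict_free(&opts);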
Move the identical code in rtp_write_header() and
ff_sdp_write_media() into ff_rtp_get_payload_type().
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
patch by: Björn Axelsson, bjorn d axelsson a intinor d se
thread: [PATCH] Remove static ByteIOContexts, 06 nov 2007
Originally committed as revision 11071 to svn://svn.ffmpeg.org/ffmpeg/trunk
patch by Ronald S. Bultje, rsbultje gmail com
thread: Re: [FFmpeg-devel] remove int readers
date: Sat, 23 Jun 2007 09:32:12 -0400
Originally committed as revision 9499 to svn://svn.ffmpeg.org/ffmpeg/trunk