Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is widely used by callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
  * stream parameters
  * codec options
  * codec state
However, it is not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer is allowed
to read. This leads to erratic behaviour depending on whether decoding
or encoding is being performed (and whether the AVStream-embedded codec
context is used).
- various synchronization issues arising from the fact that the same
context is used simultaneously by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders), with no clear rules
for which of them may modify what, and with the different processes
typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
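As a rough illustration of the intended calling pattern (a sketch only,
assuming the avcodec_parameters_to_context() helper that accompanies the
new struct), a demuxing caller now fills its own decoding context from
the exported parameters instead of reusing the AVStream-embedded one:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Sketch: open a decoder from the demuxer-exported stream parameters. */
    static int open_decoder_for_stream(AVStream *st, AVCodecContext **dec_ctx)
    {
        const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        int ret;

        if (!dec)
            return AVERROR_DECODER_NOT_FOUND;
        *dec_ctx = avcodec_alloc_context3(dec);
        if (!*dec_ctx)
            return AVERROR(ENOMEM);
        /* Copy only the stream parameters; codec options and codec state
         * stay private to this context. */
        if ((ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar)) < 0)
            return ret;
        return avcodec_open2(*dec_ctx, dec, NULL);
    }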
The packetizer only supports splitting at GOB headers - if those don't
occur frequently enough, it splits at an arbitrary byte offset (not at
a macroblock boundary either, which would be allowed by the spec) and
sends a payload header pretending that it starts with a GOB header.
As long as a receiver doesn't try to handle such cases cleverly
but just drops broken frames, this shouldn't matter too much
in practice.
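As an illustration (a sketch, not the actual packetizer code), finding
the last usable split point within the size limit amounts to scanning
backwards for a byte-aligned GOB/picture start code, i.e. two zero
bytes followed by a byte with the top bit set:

    #include <stddef.h>
    #include <stdint.h>

    /* Return the last byte-aligned start code (0x00 0x00 followed by a
     * byte with the MSB set) before 'end', or NULL if there is none - in
     * which case the payload is split at an arbitrary offset as described
     * above. */
    static const uint8_t *find_last_gob_header(const uint8_t *start,
                                               const uint8_t *end)
    {
        const uint8_t *p;

        if (end - start < 3)
            return NULL;
        for (p = end - 3; p >= start; p--)
            if (p[0] == 0 && p[1] == 0 && (p[2] & 0x80))
                return p;
        return NULL;
    }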
Signed-off-by: Martin Storsjö <martin@martin.st>
The RFC draft only specifies the "H265" name - there is no
specification saying how to interpret "HEVC" (if such a packet format
were ever specified, it could be an entirely different format).
Since this is a very new standard (still a draft), there is little
need for compatibility with existing, broken implementations. Therefore
remove the extra alias, to avoid the risk of encouraging incorrect
usage.
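For reference, the draft signals this payload format in SDP roughly as
follows (illustrative fragment; the port and dynamic payload type
number are arbitrary):

    m=video 49170 RTP/AVP 98
    a=rtpmap:98 H265/90000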
Intentionally keep the ff_hevc_dynamic_handler name for the handler,
to use "hevc" consistently as the codec name within the library
internals instead of "h265", as long as there is only one variant in
actual use.
Signed-off-by: Martin Storsjö <martin@martin.st>
H263 in RTP can be packetized in two formats (RFC 2190, RFC
2429/4629). The former normally uses the static payload type 34,
while the latter normally uses dynamic payload types with the
SDP format names H263-1998 or H263-2000.
Look for packets that don't look like proper RFC 2190 packets and
switch to depacketizing them according to the newer format (RFC
2429/4629) if they match some heuristic criteria.
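One such criterion could look like the sketch below (illustrative only,
not necessarily the exact checks used): a packet starting a picture in
the RFC 2429/4629 format has the reserved RR bits zero, the P bit set
in place of the two zero start-code bytes, and the remainder of an
H.263 start code immediately after the two-byte payload header - a
pattern a proper RFC 2190 Mode A packet, with its four-byte header,
will not normally produce:

    #include <stdint.h>

    /* Illustrative heuristic: does this look like an RFC 2429/4629 packet
     * carrying the start of a picture or GOB? */
    static int looks_like_rfc2429(const uint8_t *buf, int len)
    {
        int rr, p, plen;

        if (len < 3)
            return 0;
        rr   =   buf[0] >> 3;                        /* 5 reserved bits, zero   */
        p    =  (buf[0] >> 2) & 1;                   /* start code bytes elided */
        plen = ((buf[0] & 1) << 5) | (buf[1] >> 3);  /* extra picture header    */
        /* With P set and no extra header, the byte right after the payload
         * header must carry the '1' bit of the H.263 start code. */
        return rr == 0 && p && plen == 0 && (buf[2] & 0x80);
    }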
Signed-off-by: Martin Storsjö <martin@martin.st>
This is different from the "modern" RTP payload formats for H263 as
defined by RFC 4629, 2429 and 3555. According to the newer RFCs, this
old format is considered deprecated and should only be used for
interoperating with legacy systems.
Signed-off-by: Martin Storsjö <martin@martin.st>
This requires using a separate init function, since there aren't
necessarily any fmtp lines for this codec, so parse_sdp_a_line won't
be called. Incorporating it into the alloc function wouldn't work
either, since that is called before the full rtpmap line (where the
sample rate is extracted) has been parsed.
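A rough sketch of the shape of such an init callback (hypothetical
body; it follows the RTPDynamicProtocolHandler init signature from
rtpdec.h, and where exactly the sample rate ends up has varied between
versions):

    /* Called after the whole "a=rtpmap:<pt> <name>/<clock rate>" line has
     * been parsed, so unlike the alloc callback the sample rate is already
     * known here, and unlike parse_sdp_a_line it also runs when the SDP
     * carries no fmtp line for the codec. */
    static int example_rtp_init(AVFormatContext *s, int st_index,
                                PayloadContext *data)
    {
        AVStream *st;

        if (st_index < 0)
            return 0;
        st = s->streams[st_index];
        if (st->codecpar->sample_rate <= 0)
            return AVERROR_INVALIDDATA;
        /* ... codec-specific setup derived from the sample rate ... */
        return 0;
    }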
Signed-off-by: Martin Storsjö <martin@martin.st>