This function needs to return false, or data in the additional tables
will be skipped, and the decoder will not be able to decode frames
associated with them.
Store data from each stsd in a separate extradata buffer, keep track of
the stsc index for read and seek operations, and switch buffers when the
index differs. The decoder is notified with AV_PKT_DATA_NEW_EXTRADATA
packet side data.
Since H264 supports this notification and can be reset midstream, enable
this feature only for multiple avcC's. All other stsd types (such as
hvc1 and hev1) need decoder-side changes, so they are left disabled for
now.
This is implemented only in non-fragmented MOVs.
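For illustration, attaching the new extradata as packet side data could
look roughly like this (a minimal sketch; the wrapper function is
hypothetical, only av_packet_new_side_data and AV_PKT_DATA_NEW_EXTRADATA
are the real API):

    #include <string.h>
    #include "libavcodec/avcodec.h"

    /* Attach a new sample description's extradata to the first packet
     * that uses it, so the decoder can reinitialize before decoding. */
    static int attach_new_extradata(AVPacket *pkt,
                                    const uint8_t *extradata, int size)
    {
        uint8_t *side = av_packet_new_side_data(pkt,
                                                AV_PKT_DATA_NEW_EXTRADATA,
                                                size);
        if (!side)
            return AVERROR(ENOMEM);
        memcpy(side, extradata, size);
        return 0;
    }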
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This avoids the danger that get_bits.h might get indirectly #included before
BITSTREAM_READER_LE is defined.
Also sort headers into canonical order where appropriate.
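The robust pattern is to place the define before any include that could
pull in get_bits.h:

    /* Must come before any direct or indirect inclusion of get_bits.h,
     * so the little-endian bitstream reader gets selected. */
    #define BITSTREAM_READER_LE

    #include "get_bits.h"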
Split version files into one line per symbol/directive to allow
compatibility with the Solaris linker without preprocessing, and
eliminate $ from version file templates to simplify the postprocessing
shell command.
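As an illustrative sketch (not the exact template), an entry that packs
several directives on one line, such as

    LIBAVFORMAT_MAJOR { global: av*; ff_*; local: *; };

becomes one symbol/directive per line:

    LIBAVFORMAT_MAJOR {
        global:
            av*;
            ff_*;
        local:
            *;
    };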
Previously, we required at least enough bytes for the full box. Don't
strictly require the astronomical body and additional
notes fields, but do require an altitude field (which currently isn't
parsed). This matches the initial length check at the start of the function
(which doesn't know about the variable length place field).
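A hypothetical sketch of the resulting check, after the variable-length
place field has been consumed (field sizes per the 3GPP loci layout;
variable names are illustrative):

    /* role (1) + longitude (4) + latitude (4) + altitude (4) must
     * still be present; the trailing astronomical body and additional
     * notes strings are not strictly required. */
    if (len < 1 + 4 + 4 + 4)
        return AVERROR_INVALIDDATA;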
Signed-off-by: Martin Storsjö <martin@martin.st>
This was missed in e1eb0fc960, when ff_interleaved_peek was changed
to take a const parameter during the evolution of that patch.
Signed-off-by: Martin Storsjö <martin@martin.st>
As long as the caller only writes packets using av_interleaved_write_frame
with no manual flushing, this should allow us to always have accurate
durations at the end of fragments, since there should be at least
one queued packet in each stream (except for the stream where the
current packet is being written, but if the muxer itself does the
cutting of fragments, it also has info about the next packet for that
stream).
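In caller terms, this is the plain interleaved writing loop with no
manual flushing (i.e. no NULL packets passed to flush); a minimal
sketch, with timestamp rescaling and error handling mostly omitted:

    #include "libavformat/avformat.h"

    /* Everything goes through the interleaving queue and the queue is
     * only drained implicitly, so when the muxer ends a fragment it
     * can still peek at the next queued packet of each stream to
     * derive accurate durations. */
    static int copy_packets(AVFormatContext *in, AVFormatContext *out,
                            AVPacket *pkt)
    {
        int ret;
        while (av_read_frame(in, pkt) >= 0) {
            /* timestamp rescaling omitted for brevity */
            if ((ret = av_interleaved_write_frame(out, pkt)) < 0)
                return ret;
        }
        return av_write_trailer(out);
    }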
Signed-off-by: Martin Storsjö <martin@martin.st>
This allows callers with avio write callbacks to get the bytestream
positions that correspond to keyframes, suitable for live streaming.
In the simplest form, a caller could expect that a header is written
to the bytestream during avformat_write_header, and that the data
output to the avio context during e.g. av_write_frame corresponds
exactly to the current packet passed in.
When combined with av_interleaved_write_frame, and with muxers that
do buffering (most muxers that do some sort of fragmenting or
clustering), the mapping from input data to bytestream positions
is nontrivial.
This allows callers to get information directly about which part
of the bytestream is what, without having to resort to assumptions
about the muxer behaviour.
One keyframe/fragment/block can still be split into multiple writes
(if it is larger than the AVIOContext buffer); in that case, the
callback is called with e.g. AVIO_DATA_MARKER_SYNC_POINT for the first
part, followed by AVIO_DATA_MARKER_UNKNOWN for the following data.
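A caller could, for instance, receive these markers through the
write_data_type callback on its AVIOContext (a sketch; in some versions
the buffer parameter is const, and the sink is hypothetical):

    #include "libavformat/avio.h"

    /* Called instead of the plain write callback; type tells the
     * caller what the buffered data represents. */
    static int write_cb(void *opaque, uint8_t *buf, int buf_size,
                        enum AVIODataMarkerType type, int64_t time)
    {
        if (type == AVIO_DATA_MARKER_SYNC_POINT) {
            /* buf starts at a keyframe/fragment boundary, suitable
             * as a segment cut point for live streaming */
        }
        /* consume buf_size bytes here (sink omitted) */
        return buf_size;
    }

and wire it up on the context:

    avio_ctx->write_data_type = write_cb;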
Signed-off-by: Martin Storsjö <martin@martin.st>
When feeding input RTP packets to the depacketizer via custom IO,
it needs to pick the right stream using the payload type for
RTP packets, and using the SSRC for RTCP packets. If the first
packet is an RTCP packet, we don't (currently) know the SSRC
yet and thus can't pick the right RTP depacketizer to handle it.
By parsing the SSRC attribute in the SDP, we can map initial
RTCP packets to the right stream.
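For example, with an SDP along these lines (values are illustrative),
an initial RTCP packet carrying SSRC 1234 can be mapped to the video
stream before any RTP packet has arrived:

    m=video 0 RTP/AVP 96
    a=rtpmap:96 H264/90000
    a=ssrc:1234 cname:example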
Signed-off-by: Martin Storsjö <martin@martin.st>
It doesn't matter what the actual reason for not returning
an AVPacket was - if we didn't return any packet and we have
the next one queued, parse it immediately. (rtp_parse_queued_packet
always consumes a queued packet if one exists, so there's no risk
of infinite loops.)
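The resulting control flow is roughly (a sketch; rtp_parse_one_packet
and has_next_packet are assumed helpers in the same file):

    rv = rtp_parse_one_packet(s, pkt, bufptr, len);
    /* No packet was returned, regardless of the exact error: if
     * another packet is queued, parse it immediately.
     * rtp_parse_queued_packet always consumes a queue entry, so this
     * terminates. */
    while (rv < 0 && has_next_packet(s))
        rv = rtp_parse_queued_packet(s, pkt);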
Signed-off-by: Martin Storsjö <martin@martin.st>
The declarations that this comment referred to were removed
in 2439f2ca8 - there is no unbuffered IO in this header now.
Signed-off-by: Martin Storsjö <martin@martin.st>
We still only support a single layer, but this allows receiving
streams that have this structure present even for single-layer
streams.
Signed-off-by: Martin Storsjö <martin@martin.st>
This codepath isn't quite as bad as it used to sound, as long as
fragments are cut automatically at video packets.
Signed-off-by: Martin Storsjö <martin@martin.st>