This fixes sending back RTCP RR packets when receiving RTP over
multicast.
If the multicast stream is sent on demand (set up and signalled
via RTSP), the sender may depend on receiving RTCP RR packets to
know that there are listeners; otherwise the stream can be closed
after a certain timeout.
This fixes receiving RTSP streams over multicast on Unix from
certain Axis cameras.
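For reference, a minimal sketch of what such a receiver report looks
like on the wire, using plain POSIX sockets rather than the actual
RTSP code (the function name, the SSRC handling and the socket setup
are assumptions of the sketch):

    #include <stdint.h>
    #include <sys/socket.h>

    /* Build and send a minimal RTCP RR (RFC 3550, PT = 201) with zero
     * report blocks, enough for the sender to see that a listener is
     * still there. */
    static int send_rtcp_rr(int fd, uint32_t own_ssrc)
    {
        uint8_t buf[8];

        buf[0] = 2 << 6;          /* version 2, no padding, RC = 0     */
        buf[1] = 201;             /* packet type: receiver report      */
        buf[2] = 0;               /* length = 1, i.e. one 32-bit word  */
        buf[3] = 1;               /* follows the header                */
        buf[4] = own_ssrc >> 24;  /* SSRC of this receiver, big endian */
        buf[5] = own_ssrc >> 16;
        buf[6] = own_ssrc >>  8;
        buf[7] = own_ssrc;

        /* fd is assumed to be a connected UDP socket pointing at the
         * sender's RTCP port (by convention the RTP port + 1). */
        return send(fd, buf, sizeof(buf), 0) < 0 ? -1 : 0;
    }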
Signed-off-by: Martin Storsjö <martin@martin.st>
When this code was added in 36b532815c, it was inserted between
an existing comment and the line of code that comment referred to,
making the old comment seem to refer to the new code. This change
makes the comment read correctly again.
Signed-off-by: Martin Storsjö <martin@martin.st>
The ogg decoder wasn't padding the input buffer with the appropriate
FF_INPUT_BUFFER_PADDING_SIZE bytes, which led to uninitialized reads
in various pieces of parsing code when they thought they had more
data than they actually did.
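The usual pattern looks roughly like this (dup_padded is a made-up
helper name for the sketch):

    #include <string.h>
    #include "libavcodec/avcodec.h"   /* FF_INPUT_BUFFER_PADDING_SIZE */
    #include "libavutil/mem.h"

    /* Copy a demuxed chunk into a properly padded buffer, zeroing the
     * padding so parsers that overread slightly see deterministic
     * data instead of uninitialized memory. */
    static uint8_t *dup_padded(const uint8_t *data, int size)
    {
        uint8_t *buf = av_malloc(size + FF_INPUT_BUFFER_PADDING_SIZE);
        if (!buf)
            return NULL;
        memcpy(buf, data, size);
        memset(buf + size, 0, FF_INPUT_BUFFER_PADDING_SIZE);
        return buf;
    }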
Signed-off-by: Dale Curtis <dalecurtis@chromium.org>
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
Also, do not keep trying to find and open a decoder in try_decode_frame()
if we already tried and failed once.
This fixes avformat_find_stream_info() always searching up to
max_analyze_duration when demuxing codecs for which no decoder is
available.
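The idea, roughly, with the tried/failed state cached per stream
(the field name and layout here are illustrative, not the exact
code):

    /* 0 = not tried yet, 1 = opened, -1 = tried and failed */
    if (st->info->found_decoder < 0)
        return -1;                          /* don't retry every packet */
    if (!st->info->found_decoder) {
        AVCodec *codec = avcodec_find_decoder(st->codec->codec_id);
        if (!codec || avcodec_open2(st->codec, codec, NULL) < 0) {
            st->info->found_decoder = -1;   /* remember the failure */
            return -1;
        }
        st->info->found_decoder = 1;
    }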
Also, do not give AVCodecContext.frame_size priority for muxing.
Updated 2 FATE references:
dxa-feeble - adds one audio frame that is still within the 2 seconds
specified by -t 2 in the FATE test
wmv8-drm-nodec - durations are not needed; previously they were
estimated from the packet size and average bit rate.
It is unnecessary, and for some codecs we're reading more than one
frame per packet anyway. Instead we use a private context variable to
calculate the bit rate, stream duration, and packet durations, as
sketched below.
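Schematically, with illustrative names (not the actual struct):

    #include <stdint.h>

    typedef struct MyDemuxContext {
        uint64_t total_samples; /* samples demuxed so far    */
        uint64_t total_bytes;   /* payload bytes seen so far */
    } MyDemuxContext;

    /* Called once per packet; nb_samples may cover several frames.
     * Returns the average bit rate derived from what was actually
     * demuxed, with no frame_size involved. */
    static int64_t update_stats(MyDemuxContext *c, int nb_samples,
                                int pkt_size, int sample_rate)
    {
        c->total_samples += nb_samples;
        c->total_bytes   += pkt_size;
        return c->total_samples ?
               (int64_t)(c->total_bytes * 8 * sample_rate /
                         c->total_samples) : 0;
    }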
Updated the FATE seek test, which now has slightly different
timestamps due to a more accurate bit rate calculation.
For encoding, frame_size is not a reliable indicator of packet
duration. Also, we don't want to force the demuxer to determine
frame_size just for stream copy to work.
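A duration derived from an actual sample count, rather than from
frame_size, might look like this (a sketch, not the real code):

    #include "libavformat/avformat.h"
    #include "libavutil/mathematics.h"

    /* Packet duration from a real sample count, expressed in the
     * stream's time base; works for stream copy too, where frame_size
     * is unknown. */
    static int64_t samples_to_duration(int nb_samples, int sample_rate,
                                       AVStream *st)
    {
        return av_rescale_q(nb_samples, (AVRational){ 1, sample_rate },
                            st->time_base);
    }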
Split off packet parsing into a separate function. Parse full packets at
once and store them in a queue, eliminating the need for tracking
parsing state in AVStream.
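The queue itself is just a singly linked list of fully parsed
packets; schematically (the helper is a sketch, not the exact code):

    #include "libavformat/avformat.h"
    #include "libavutil/mem.h"

    /* Append a fully parsed packet to the queue; read_frame_internal()
     * then simply pops the head instead of keeping parser state in
     * AVStream. */
    static int queue_packet(AVPacketList **head, AVPacketList **tail,
                            AVPacket *pkt)
    {
        AVPacketList *l = av_mallocz(sizeof(*l));
        if (!l)
            return AVERROR(ENOMEM);
        l->pkt = *pkt;
        if (*tail)
            (*tail)->next = l;
        else
            *head = l;
        *tail = l;
        return 0;
    }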
The horrible unreadable loop in read_frame_internal() now isn't weirdly
ordered and doesn't contain evil gotos, so it should be much easier to
understand.
compute_pkt_fields() now invents slightly different timestamps for
two raw vc1 tests, due to has_b_frames being set a bit later. They
shouldn't be any more wrong (or right) than the previous ones.
Make the packet buffer a parameter instead of hardcoding it to
AVFormatContext.packet_buffer.
Also move the function higher in the file, since it will be called from
read_frame_internal().
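Roughly the shape of the change (a sketch of the signature only):

    /* before: always appended to s->packet_buffer */
    static AVPacket *add_to_pktbuf(AVFormatContext *s, AVPacket *pkt);

    /* after: the caller chooses which list to append to */
    static AVPacket *add_to_pktbuf(AVPacketList **packet_buffer,
                                   AVPacket *pkt,
                                   AVPacketList **plast_pktl);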
The time base is 1/sample_rate, not 1/90000.
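In lavf terms, that amounts to something like this (a sketch; st and
sample_rate come from the demuxer's read_header):

    #include "libavformat/internal.h"

    /* one tick = one sample, instead of the 1/90000 default */
    avpriv_set_pts_info(st, 64, 1, sample_rate);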
Several more codecs encode the sample count in the first 4 bytes of the
chunk, so we set the durations accordingly. Also, we can set start_time and
packet duration instead of keeping track of the sample count in the demuxer.
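Sketch of the duration side (the byte order of the sample count is an
assumption here):

    #include "libavutil/intreadwrite.h"

    /* the chunk starts with its own sample count */
    uint32_t nb_samples = AV_RL32(pkt->data);
    pkt->duration = nb_samples;   /* time base is 1/sample_rate */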
Fixes timestamp calculation.
The FATE reference is updated because timestamp calculations are now more
accurate. Previous timestamps were based on average bit rate.
This allows it to be used with get_bits without the threat of
overreads.
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
CC: libav-stable@libav.org
The container has no timestamps, and the frame rate isn't stored in
the data either.
The decoder sets the codec time base to the experimentally found
value of 1/15. Do the same in the demuxer; it should at least be
better than the default 1/90000.
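That is, something like this on the demuxer side (a sketch):

    #include "libavformat/internal.h"

    /* match the decoder's experimentally found rate instead of the
     * 1/90000 default */
    avpriv_set_pts_info(st, 64, 1, 15);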