The resolution is in the packets, so decoding must happen.
Since most other formats do not set the dimensions, make this
a special case for PGS. If other codecs were to have the
same requirement, using a CODEC_CAP would be preferred.
This also changes behavior, as the descriptor table is more complete than
the switch/case it replaces, and because all non-video streams are now
considered intra only.
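For illustration, a descriptor-based check amounts to roughly the following
(a sketch, not the exact committed code):

    #include <libavcodec/avcodec.h>

    /* Sketch: non-video streams are treated as intra only; video streams
     * are intra only when their codec descriptor says so. */
    static int stream_is_intra_only(enum AVCodecID id)
    {
        const AVCodecDescriptor *d = avcodec_descriptor_get(id);
        if (!d)
            return 0;
        if (d->type != AVMEDIA_TYPE_VIDEO)
            return 1;
        return !!(d->props & AV_CODEC_PROP_INTRA_ONLY);
    }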
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
These are normally initialized to AV_NOPTS_VALUE at the start
of avformat_find_stream_info, but if a new stream is found while
this function is running (e.g. in mpegts), the newly added
AVStreams didn't have these values properly initialized, leading
to avformat_find_stream_info terminating too soon (when the
first timestamps are far from 0).
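A sketch of the fix; the struct and field names below only stand in for the
internal per-stream probing state:

    #include <stdint.h>
    #include <libavutil/avutil.h>

    /* Illustrative struct standing in for the internal per-stream info:
     * initializing it at stream creation time means streams that appear
     * while avformat_find_stream_info() is already running are covered too. */
    struct probe_info {
        int64_t fps_first_dts;
        int64_t fps_last_dts;
    };

    static void probe_info_init(struct probe_info *info)
    {
        info->fps_first_dts = AV_NOPTS_VALUE;
        info->fps_last_dts  = AV_NOPTS_VALUE;
    }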
Signed-off-by: Martin Storsjö <martin@martin.st>
This adds a function to retrieve the number of entries in a
dictionary and updates the places directly accessing what should
be an opaque struct to use this new function instead.
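A minimal usage sketch of the new accessor:

    #include <stdio.h>
    #include <libavutil/dict.h>

    int main(void)
    {
        AVDictionary *d = NULL;
        av_dict_set(&d, "title",  "example", 0);
        av_dict_set(&d, "artist", "someone", 0);
        printf("%d entries\n", av_dict_count(d)); /* prints "2 entries" */
        av_dict_free(&d);
        return 0;
    }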
Signed-off-by: Mans Rullgard <mans@mansr.com>
This is limited to the chars that aren't filtered by av_log() already.
We might filter more aggressively if there's some case where this becomes
needed.
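Roughly, the filtering amounts to a helper along these lines (exact
character ranges illustrative):

    #include <stdint.h>

    /* Replace control characters that av_log() does not already filter
     * with '?' before the metadata is printed. */
    static void sanitize(uint8_t *line)
    {
        while (*line) {
            if (*line < 0x08 || (*line > 0x0d && *line < 0x20))
                *line = '?';
            line++;
        }
    }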
Fixes Ticket1181
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
At this place, the normal way of initializing a struct works
fine; there's no need for a struct literal.
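For illustration, the difference is just between these two forms
(AVRational is used as an arbitrary example type):

    #include <libavutil/rational.h>

    static AVRational example(void)
    {
        AVRational r = { 1, 25 };  /* plain struct initialization */
        /* rather than: r = (AVRational){ 1, 25 };  a compound (struct) literal */
        return r;
    }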
Signed-off-by: Martin Storsjö <martin@martin.st>
According to its description, it is supposed to be the LCM of all the
frame durations. The usability of such a thing is vanishingly small,
especially since we cannot determine it with any amount of reliability.
Therefore get rid of it after the next bump.
Replace it with the average framerate where it makes sense.
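Where calling code needs a single number, the replacement typically looks
like this sketch (a zero value means the rate is unknown):

    #include <libavformat/avformat.h>
    #include <libavutil/rational.h>

    /* Sketch: read the average framerate where r_frame_rate used to be
     * consulted; an unset rate is reported as 0. */
    static double stream_fps(const AVStream *st)
    {
        if (st->avg_frame_rate.num && st->avg_frame_rate.den)
            return av_q2d(st->avg_frame_rate);
        return 0.0;
    }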
FATE results for the wtv and xmv demux tests change. In the wtv case
this is caused by the file being corrupted (or possibly badly cut) and
containing invalid timestamps. This results in lavf estimating the
framerate wrong and making up wrong frame durations.
In the xmv case the file contains pts jumps, so again the estimated
framerate is far from anything sane and lavf again makes up different
frame durations.
In some other tests lavf starts making up frame durations from a
different frame.
AVPacket.duration is mostly made up and thus completely useless; this is
especially true for video streams.
Therefore use dts difference for framerate estimation and
the max_analyze_duration check.
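A sketch of the idea (variable names illustrative):

    #include <stdint.h>
    #include <libavutil/rational.h>

    /* Estimate the framerate from the dts span of the probed packets
     * instead of trusting AVPacket.duration. */
    static double guess_fps(int64_t first_dts, int64_t last_dts,
                            int nb_frames, AVRational time_base)
    {
        double span;
        if (nb_frames < 2 || last_dts <= first_dts)
            return 0.0;
        span = (last_dts - first_dts) * av_q2d(time_base); /* seconds covered */
        return (nb_frames - 1) / span;
    }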
The asyncts test now needs -analyzeduration, because the default is 5
seconds and the audio stream in the sample appears at ~10 seconds.
Useful in cases where a significant analyzeduration is
still needed, while minimizing buffering before output.
An example is processing low-latency streams where not all
media types will necessarily show up if the
analyzeduration is small.
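A usage sketch, assuming the option referred to is the AVFMT_FLAG_NOBUFFER
demuxer flag; the flag name is an assumption here, and error handling is
trimmed:

    #include <libavformat/avformat.h>

    /* Assumption: AVFMT_FLAG_NOBUFFER keeps packets seen while probing
     * from being queued for later reads, so a long analyzeduration does
     * not translate into a correspondingly long buffering delay. */
    static int open_low_latency(AVFormatContext **ctx, const char *url)
    {
        int ret = avformat_open_input(ctx, url, NULL, NULL);
        if (ret < 0)
            return ret;
        (*ctx)->flags |= AVFMT_FLAG_NOBUFFER;
        return avformat_find_stream_info(*ctx, NULL);
    }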
Additional changes by Josh Allmann <joshua.allmann@gmail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
By moving it to a later point, relative and unknown timestamps
are more likely to have been corrected.
similar patch reviewed-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Conflicts:
libavformat/utils.c
If skip_samples is set and timestamps are synthesized using durations,
make them start at -skip_samples (rescaled) instead of 0,
so that the timestamp of the first undiscarded sample is 0.
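A sketch of the arithmetic (names illustrative):

    #include <stdint.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    /* When timestamps are synthesized from durations, start at
     * -skip_samples rescaled into the stream time base, so the first
     * sample that is not discarded ends up with a timestamp of 0. */
    static int64_t first_synthetic_pts(int skip_samples, int sample_rate,
                                       AVRational time_base)
    {
        return av_rescale_q(-(int64_t)skip_samples,
                            (AVRational){ 1, sample_rate }, time_base);
    }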
This adds the minimum delay needed with the current decoder to
recognize the reorder buffer size for the reference bitstreams.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
To ensure the full range of values is still used, also adjust all uses of this function to loop from 0
instead of 1. This way only 60.00 is added and nothing is lost.
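For illustration, the candidate rates in the linear range work out to
(i + 1)/12 fps when the returned value is read against a 12*1001 time base
(a sketch; the actual helper also covers a few extra rates beyond this
range):

    /* Sketch: looping i from 0 and returning (i + 1) * 1001 keeps every
     * previously tested rate and adds exactly 60.00 fps at the top end,
     * where the old "loop from 1, return i * 1001" variant stopped just
     * below 60. */
    static int get_std_framerate(int i)
    {
        return (i + 1) * 1001;
    }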
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>