This allows setting or overriding e.g. the bitrate parameter, which
is required for the smoothstreaming muxer. Normally, the bitrate
is set by the demuxer in these cases, but not all demuxers can
provide it. This allows stream copying data to the smoothstreaming
muxer from such inputs.
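For illustration, a minimal sketch of the fallback using today's
AVCodecParameters API (which may differ from the code the patch
actually touches); the helper name and the 128000 figure are
illustrative:

    #include <libavformat/avformat.h>

    /* Hedged sketch: copy a stream and fall back to a user-supplied
     * bitrate when the demuxer could not determine one, so the
     * smoothstreaming muxer has the value it requires. */
    static int add_copy_stream(AVFormatContext *oc, const AVStream *ist,
                               int64_t user_bit_rate)
    {
        AVStream *ost = avformat_new_stream(oc, NULL);
        int ret;
        if (!ost)
            return AVERROR(ENOMEM);
        ret = avcodec_parameters_copy(ost->codecpar, ist->codecpar);
        if (ret < 0)
            return ret;
        if (ost->codecpar->bit_rate <= 0 && user_bit_rate > 0)
            ost->codecpar->bit_rate = user_bit_rate; /* e.g. 128000, from a user option */
        return 0;
    }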
Signed-off-by: Martin Storsjö <martin@martin.st>
Some systems require sys/time.h to be explicitly included before
sys/resource.h. The configure check already does this.
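For illustration, the portable include order, with getrusage() as the
consumer that needs sys/resource.h:

    #include <sys/time.h>      /* some systems need this included first */
    #include <sys/resource.h>
    #include <stdio.h>

    int main(void)
    {
        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) == 0)
            printf("user time: %ld.%06ld s\n",
                   (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
        return 0;
    }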
Signed-off-by: Mans Rullgard <mans@mansr.com>
Error out on init if a codec with CODEC_CAP_EXPERIMENTAL is requested
and strict_std_compliance is not FF_COMPLIANCE_EXPERIMENTAL.
Move the check from avconv to avcodec_open2() and return
AVERROR_EXPERIMENTAL accordingly.
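A minimal sketch of what callers opt into after this change (the helper
name is illustrative):

    #include <libavcodec/avcodec.h>

    /* Without the compliance override, avcodec_open2() fails with
     * AVERROR_EXPERIMENTAL for codecs marked experimental. */
    static int open_codec(const AVCodec *codec, AVCodecContext **out)
    {
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        int ret;
        if (!ctx)
            return AVERROR(ENOMEM);
        ctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL; /* explicit opt-in */
        ret = avcodec_open2(ctx, codec, NULL);
        if (ret < 0) {
            avcodec_free_context(&ctx);
            return ret;
        }
        *out = ctx;
        return 0;
    }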
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
It has not worked for anything other than fringe codecs (asv1/2, mdec,
mjpeg[b]) since about 2003 and nobody ever noticed or complained. This
sufficiently proves that there are no users of this option who have a
clue of what they are doing, so it is completely useless.
Before this commit, poll_filters() reads all frames available on each
lavfi output. This does not work for lavfi sources that produce
an unlimited number of frames, e.g. color and similar.
With this commit, poll_filters() reads from the output with the lowest
timestamp and returns to wait for more input if no frames are available
on it.
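A simplified sketch of the selection logic (the struct and names are
hypothetical, not the actual avconv code): compare the last output
timestamps in a common time base and pick the stream that is furthest
behind.

    #include <stdint.h>
    #include <libavutil/avutil.h>
    #include <libavutil/mathematics.h>

    typedef struct OutputStream {   /* hypothetical, for illustration */
        int64_t    last_pts;        /* last timestamp seen on this output */
        AVRational time_base;
    } OutputStream;

    static OutputStream *choose_output(OutputStream **streams, int nb_streams)
    {
        OutputStream *best = NULL;
        int64_t best_ts = INT64_MAX;
        for (int i = 0; i < nb_streams; i++) {
            int64_t ts = av_rescale_q(streams[i]->last_pts,
                                      streams[i]->time_base, AV_TIME_BASE_Q);
            if (ts < best_ts) {
                best_ts = ts;
                best    = streams[i];
            }
        }
        return best; /* caller reads one frame from it, or waits for more input */
    }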
According to its description, it is supposed to be the LCM of all the
frame durations. The usability of such a thing is vanishingly small,
especially since we cannot determine it with any amount of reliability.
Therefore get rid of it after the next bump.
Replace it with the average framerate where it makes sense.
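Where a single rate is still wanted, the average framerate can be
queried instead; a minimal sketch:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    /* Report the average framerate of a stream, the suggested
     * replacement for the removed field. */
    static void print_avg_framerate(const AVStream *st)
    {
        AVRational fr = st->avg_frame_rate;
        if (fr.num && fr.den)
            printf("average framerate: %d/%d (%.3f fps)\n",
                   fr.num, fr.den, av_q2d(fr));
    }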
FATE results for the wtv and xmv demux tests change. In the wtv case
this is caused by the file being corrupted (or possibly badly cut) and
containing invalid timestamps. This results in lavf estimating the
framerate wrong and making up wrong frame durations.
In the xmv case the file contains pts jumps, so again the estimated
framerate is far from anything sane and lavf again makes up different
frame durations.
In some other tests lavf starts making up frame durations at a
different frame.
If the output frame size is smaller than the input frame size,
and the input stream time base corresponds exactly to the input
frame size (giving input packet timestamps like 0, 1, 2, 3, 4 etc),
the output timestamps from the filter will be like
0, 1, 2, 3, 4, 4, 5 ..., leading to non-monotonic timestamps later.
A concrete example is input mp3 data having frame sizes of 1152
samples, transcoded to aac with 1024-sample frames.
By setting the audio filter time base to the sample rate, we will
get sensible timestamps for all output packets, regardless of
the ratio between the input and output frame sizes.
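The rounding effect is easy to reproduce with av_rescale_q(); this
small illustration (a 44100 Hz input is assumed) prints the
0, 1, 2, 3, 4, 4, 5 sequence for the coarse time base:

    #include <stdio.h>
    #include <inttypes.h>
    #include <libavutil/mathematics.h>

    int main(void)
    {
        AVRational fine   = { 1,    44100 }; /* one tick per sample */
        AVRational coarse = { 1152, 44100 }; /* one tick per 1152-sample mp3 frame */
        for (int i = 0; i < 7; i++) {
            int64_t pts = i * 1024LL;        /* start of each 1024-sample aac frame */
            printf("frame %d: fine pts %"PRId64", coarse pts %"PRId64"\n",
                   i, pts, av_rescale_q(pts, fine, coarse));
        }
        return 0; /* coarse pts: 0 1 2 3 4 4 5 -- frames 4 and 5 collide */
    }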
Signed-off-by: Martin Storsjö <martin@martin.st>
This allows passing the right options to encoders when there's more
than one encoder for a certain codec id.
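A hedged sketch of why this matters (the encoder name and the option
are purely illustrative): two encoders can share a codec id but expose
different private options, so the chosen encoder instance, not the id,
has to drive the option matching.

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    static AVCodecContext *open_named_encoder(const char *name)
    {
        const AVCodec *enc = avcodec_find_encoder_by_name(name);
        AVCodecContext *ctx;
        if (!enc)
            return NULL;
        ctx = avcodec_alloc_context3(enc);
        if (!ctx)
            return NULL;
        /* Options set on priv_data only exist for this particular
         * encoder, even if another encoder shares the same codec id. */
        av_opt_set(ctx->priv_data, "preset", "fast", 0); /* hypothetical option */
        return ctx;
    }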
Signed-off-by: Martin Storsjö <martin@martin.st>