This will be used to allow writing file sequences using the tee output to
multiple places in parallel
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
If negative pts are possible for some codecs in ogg then the code needs to be
changed to use signed values.
Found-by: Thomas Guilbert <tguilbert@google.com>
Fixes: clusterfuzz_usan-2016-08-02
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Set the stream_id to 0xbd (private_stream_1). Tools seem to assume
that value, and this is consistent with the MPEG TS specification (ITU-T
H.222.0 section 2.12.3).
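For reference, a minimal standalone illustration (not the muxer code) of what
that value means on the wire: a PES packet for private_stream_1 starts with
the 0x000001 start code prefix followed by the 0xbd stream_id byte.

#include <stdint.h>
#include <stdio.h>

/* Illustration only, not the muxer code: the first four bytes of a PES
 * packet for private_stream_1, i.e. the 0x000001 start code prefix followed
 * by the 0xbd stream_id (ITU-T H.222.0). */
int main(void)
{
    static const uint8_t pes_start[4] = { 0x00, 0x00, 0x01, 0xbd };
    fwrite(pes_start, 1, sizeof(pes_start), stdout);
    return 0;
}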
Rev #2: Fixed doubled header writing, checked that FATE runs without errors
Rev #3: Fixed coding style
This commit addresses the following scenario:
We are using ffmpeg to transcode or remux mkv (or something else) to mkv, and the result is being streamed on-the-fly to an HTML5 client (streaming starts while ffmpeg is still running). The problem is that the client is unable to detect the duration, because the duration is only written to the mkv at the end of the transcoding/remuxing process. In matroskaenc.c, the duration is only written during mkv_write_trailer, not during mkv_write_header.
The approach:
ffmpeg currently puts quite some effort into estimating the durations of source streams, but in many cases the source stream durations are still left at 0, and these durations are never mapped to or used for output streams. As much as I would have liked to deduce or estimate output durations from input stream durations, I realized that this is a hard task (as Nicolas already mentioned in a previous conversation). It would involve changes to the duration calculation/estimation/deduction for input streams and propagating these durations to output streams or the output context in a correct way.
So I looked for a simple and small solution with better chances of being accepted. In webmdashenc.c I found that a duration is written during write_header, and this duration is taken from the streams' metadata, so I decided on a similar approach.
And here's what it does:
First, it checks the duration of the AVFormatContext. In typical cases this value is not set, but it is set when the user has specified a recording time or an end time via the -t or -to parameters.
Next, it looks for a DURATION field in the metadata of the output context (AVFormatContext::metadata). This only exists if the user has explicitly specified a DURATION metadata value on the command line.
Finally, it iterates over all streams looking for a "DURATION" metadata entry (this works unless the option "-map_metadata -1" has been specified) and determines the maximum value.
The precedence is as follows: 1. use the duration of the AVFormatContext; 2. use the explicitly specified metadata duration value; 3. use the maximum (mapped) metadata duration over all streams.
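A rough sketch of that precedence using the public dictionary and
time-parsing APIs; the helper name is made up and the actual matroskaenc.c
code differs in details:

#include <libavformat/avformat.h>
#include <libavutil/parseutils.h>

/* Hedged sketch of the precedence above; returns a duration in AV_TIME_BASE
 * (microsecond) units, or 0 if nothing was found. */
static int64_t guess_header_duration(AVFormatContext *s)
{
    AVDictionaryEntry *e;
    int64_t dur, max = 0;

    /* 1. Duration of the AVFormatContext (set e.g. via -t / -to) */
    if (s->duration > 0)
        return s->duration;

    /* 2. Explicitly specified DURATION metadata on the output context */
    if ((e = av_dict_get(s->metadata, "DURATION", NULL, 0)) &&
        av_parse_time(&dur, e->value, 1) >= 0)
        return dur;

    /* 3. Maximum (mapped) DURATION metadata over all streams */
    for (unsigned i = 0; i < s->nb_streams; i++) {
        e = av_dict_get(s->streams[i]->metadata, "DURATION", NULL, 0);
        if (e && av_parse_time(&dur, e->value, 1) >= 0 && dur > max)
            max = dur;
    }
    return max;
}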
To test this:
1. With explicit recording time:
ffmpeg -i file:"src.mkv" -loglevel debug -t 01:38:36.000 -y "dest.mkv"
2. Take duration from metadata specified via command line parameters:
ffmpeg -i file:"src.mkv" -loglevel debug -map_metadata -1 -metadata Duration="01:14:33.00" -y "dest.mkv"
3. Take duration from mapped input metadata:
ffmpeg -i file:"src.mkv" -loglevel debug -y "dest.mkv"
Regression risk:
Very low IMO, because it only affects the header while ffmpeg is still running. When ffmpeg completes the process, the duration is rewritten to the header with the usual value (the same as without this commit).
Signed-off-by: SoftWorkz <softworkz@hotmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The HLS demuxer calls avformat_find_stream_info() on the subdemuxer while
overriding the subdemuxer's AVFMTCTX_NOHEADER flag by clearing it.
However, this prevents some streams in some MPEG TS streams from being
detected properly.
Simply removing the clearing of the flag would cause the inner
avformat_find_stream_info() call to take longer in some cases, without
a way to control it.
To fix the issue, do not clear the flag but propagate it to the HLS
demuxer. To avoid the above-mentioned mandatory delay, the call to
avformat_find_stream_info() is dropped except in the HLS ID3 timestamped
case. The user of the HLS demuxer should call avformat_find_stream_info()
on the HLS demuxer if it wants to find the stream info.
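A minimal caller-side sketch (public API only) of what that looks like; the
function and its error handling are illustrative:

#include <libavformat/avformat.h>

/* Minimal caller-side sketch: after this change the stream info scan is the
 * API user's responsibility, via the public avformat_find_stream_info(). */
static int open_hls_input(const char *url)
{
    AVFormatContext *ic = NULL;
    int ret = avformat_open_input(&ic, url, NULL, NULL);
    if (ret < 0)
        return ret;

    /* Streams may only appear once packets are read when the (sub)demuxer
     * sets AVFMTCTX_NOHEADER; avformat_find_stream_info() deals with that. */
    ret = avformat_find_stream_info(ic, NULL);
    if (ret < 0) {
        avformat_close_input(&ic);
        return ret;
    }

    av_dump_format(ic, 0, url, 0);
    avformat_close_input(&ic);
    return 0;
}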
The main streams are now created dynamically after read_header time if
the subdemuxer uses AVFMTCTX_NOHEADER (mpegts).
Subdemuxer avformat_find_stream_info() is still called for the HLS ID3
timestamped case as the HLS demuxer needs to know the packet durations
to properly interleave ID3 timestamped streams with MPEG TS streams on
sub-segment level.
Fixes ticket #4930.
Creation of main demuxer streams from subdemuxer streams is moved to
update_streams_from_subdemuxer() which can be called repeatedly.
There should be no functional changes.
Commit 81306fd4bdf ("hls: eliminate ffurl_* usage", merged in d0fc5de3a6)
changed the hls demuxer to use AVIOContext instead of URLContext for its
HTTP requests.
The HLS demuxer uses the "offset" option of the http protocol, requesting
the initial file offset for the I/O (the http URLProtocol uses the "Range:"
HTTP header to try to accommodate that).
However, the code in libavformat/aviobuf.c seems to be doing its own
accounting for the current file offset (AVIOContext.pos), with the
assumption that the initial offset is always zero.
The HLS demuxer does an explicit seek after open_url to account for cases
where the "offset" was not effective (due to the URL being a local file
or the HTTP server not obeying it), which should be a no-op in case the
file offset is already at that position.
However, since aviobuf.c code thinks the starting offset is 0, this
doesn't work properly.
This breaks retrieval of ranged media segments.
To fix the regression, just drop the seek call from the HLS demuxer when
the HTTP(S) protocol is used.
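For illustration, a hedged sketch of a ranged open at the avio level; the
helper is hypothetical and not the hls.c code, but it shows the "offset"
option and the now-dropped follow-up seek discussed above:

#include <libavformat/avio.h>
#include <libavutil/dict.h>

/* Hypothetical helper, not the hls.c code: open a URL with an initial byte
 * offset. The "offset" option is handled by the http protocol via a
 * "Range:" header (and is ignored e.g. for plain files). */
static int open_segment_at(AVIOContext **pb, const char *url, int64_t offset)
{
    AVDictionary *opts = NULL;
    int ret;

    av_dict_set_int(&opts, "offset", offset, 0);
    ret = avio_open2(pb, url, AVIO_FLAG_READ, NULL, &opts);
    av_dict_free(&opts);

    /* The previous code followed up with avio_seek(*pb, offset, SEEK_SET) as
     * a supposed no-op safety net; since AVIOContext position accounting
     * starts at 0 regardless of the initial offset, that seek is now skipped
     * for http(s). */
    return ret;
}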
Commit 9200514ad8 ("lavf: replace AVStream.codec with
AVStream.codecpar") merged in commit 6f69f7a8bf changed
avformat_find_stream_info() to put the extradata it got from
st->parser->parser->split() to st->internal->avctx instead of st->codec
(extradata in st->internal->avctx will be later copied to st->codecpar).
However, in the same function, the "is stream ready?" check was changed
to check for extradata in st->codecpar instead of st->codec, even
though st->codecpar is not yet updated at that point.
Extradata retrieved from split() is therefore no longer taken into
account, and avformat_find_stream_info() will needlessly continue probing
in some cases.
Fix that by checking for the extradata at st->internal->avctx where it
is actually put.
It's a small and simple function that can be inlined.
This removes one private symbol and should reduce object dependencies with the next
major bump
Signed-off-by: James Almer <jamrial@gmail.com>
This ensures that AV_NOPTS_VALUE is handled correctly.
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
When seeking in a file where the codec delay is greater than 0, the timecode
can become negative after offsetting by the codec delay. Failing to cast
to a signed int64 will cause the check against skip_to_timecode to evaluate
true for these negative values. This breaks the "skip_to" seek mechanism.
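A tiny standalone illustration of the signedness pitfall (values and names
are made up, not the matroskadec.c code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t timecode = 5;    /* timecode after seeking            */
    int64_t  delay    = 20;   /* codec delay in the same time base */
    int64_t  skip_to  = 100;  /* target of the "skip_to" mechanism */

    /* Unsigned arithmetic: 5 - 20 wraps to a huge value, so the check
     * against skip_to wrongly evaluates true. */
    printf("unsigned: %d\n", timecode - delay >= (uint64_t)skip_to);

    /* Casting to signed first keeps the negative value, and the check
     * fails as intended. */
    printf("signed:   %d\n", (int64_t)timecode - delay >= skip_to);
    return 0;
}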
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fix the confusion about which time base is used.
Check size returned from avio_size()
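As a hedged sketch of the kind of check meant here (not necessarily the
actual change): avio_size() returns a negative AVERROR on failure, so the
result must not be used as a length without checking it.

#include <libavformat/avio.h>

/* Sketch only: treat a failed avio_size() as "size unknown" instead of
 * using the negative error code as a length. */
static int64_t safe_size(AVIOContext *pb)
{
    int64_t size = avio_size(pb);
    return size < 0 ? 0 : size;
}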
Signed-off-by: Jörn Heusipp <osmanx@problemloesungsmaschine.de>
Signed-off-by: Josh de Kock <josh@itanimul.li>
The header was never installed and the function is only used in libavformat
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Add the ff_format_output_open() utility function to wrap the io_open
callback of the AVFormatContext structure.
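A hedged sketch of what such a wrapper can look like, assuming a signature
mirroring the io_open callback; the real function may differ in signature
and error handling:

#include <libavformat/avformat.h>

static int format_output_open_sketch(AVFormatContext *s, const char *url,
                                     AVDictionary **options)
{
    if (!s->io_open)
        return AVERROR(EINVAL);
    /* Delegate to the io_open callback so that custom I/O installed by the
     * API user (or the default avio-based implementation) is honoured. */
    return s->io_open(s, &s->pb, url, AVIO_FLAG_WRITE, options);
}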
Signed-off-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
This will add support for flushing by writing a NULL packet
to the tee muxer, which propagates the action
to the slave muxers as expected.
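Caller-side, this is triggered through the standard flush convention of the
muxing API; a minimal usage sketch:

#include <libavformat/avformat.h>

/* Passing NULL to av_write_frame() asks the muxer to flush; with this
 * change the tee muxer forwards that flush to each slave muxer. */
static int flush_output(AVFormatContext *oc)
{
    return av_write_frame(oc, NULL);
}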
Signed-off-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Use ff_stream_encode_params_copy() to copy the encoding-related
fields (parameters) of a stream.
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>