force_codec_ids now supports AVMEDIA_TYPE_DATA, and avformat_query_codec
accepts data codecs in addition to video, audio and subtitle tracks.
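As a rough illustration (not part of the patch), querying a muxer for a data
codec might look like the sketch below; the muxer name, codec ID and helper
name are only examples:

    #include <libavformat/avformat.h>

    /* Ask whether the named muxer can store a data codec. */
    int can_mux_data_codec(const char *muxer_name)
    {
        const AVOutputFormat *ofmt = av_guess_format(muxer_name, NULL, NULL);
        if (!ofmt)
            return AVERROR_MUXER_NOT_FOUND;
        /* With this change, data codec IDs are considered as well. */
        return avformat_query_codec(ofmt, AV_CODEC_ID_TIMED_ID3,
                                    FF_COMPLIANCE_NORMAL);
    }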
Signed-off-by: Erkki Seppälä <erkki.seppala.ext@nokia.com>
Signed-off-by: OZOPlayer <OZOPL@nokia.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Restore the original timestamps in write_packet() if the
actual write operation was not successful. This allows the
same packet to be passed to a nonblocking muxer repeatedly
without corrupting its timestamps.
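A hedged sketch of the resulting usage pattern, assuming a muxer whose write
call can fail with AVERROR(EAGAIN); since the timestamps are restored on
failure, the identical packet can simply be resubmitted:

    #include <libavformat/avformat.h>

    static int write_with_retry(AVFormatContext *oc, AVPacket *pkt)
    {
        int ret;
        do {
            /* Timestamps stay intact on failure, so retrying is safe. */
            ret = av_write_frame(oc, pkt);
        } while (ret == AVERROR(EAGAIN));
        return ret;
    }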
Signed-off-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes out of array read
Fixes: 049fdf78565f1ce5665df236d90f8657/asan_heap-oob_10a5a97_1026_42f9d4855547329560f385768de2f3fb.wtv
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
When ffmpeg exits unexpectedly, starting a new ffmpeg run would
overwrite the old segment list. Adding this flag allows the new
segments to be appended to the old HLS segment list.
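A hedged usage example, assuming the flag is exposed as an hls_flags value named append_list:
ffmpeg -i src.ts -c copy -hls_flags append_list out.m3u8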
Signed-off-by: LiuQi <liuqi@gosun.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This change relaxes the whitelist on reading color metadata in MOV/BMFF
containers. The whitelist on writing values is still in place.
As a consequence it also fixes an apparent bug in reading 'nclc' values.
The 'nclc' spec [1] is in harmony with ISO 23001-8 for the values it
lists, but the code getting removed was remapping 5->6 and 6->7 for
primaries, which is incorrect, and was remapping 6->5 for color matrix
("colorspace" in the code), which is equivalent but an unnecessary
inconsistency. This logic error doesn't appear in movenc.
Removing the whitelist allows proper conversion when the source codec
relies on the container for proper signaling of newer codepoints, such
as DNxHR and VP9. If converting to a codec or container that has updated
its spec to include the new codepoints, the metadata will be preserved.
If going back to MOV/BMFF, the output whitelist will still kick in, so
this won't result in out-of-spec files being created.
[1] https://developer.apple.com/library/mac/technotes/tn2162/_index.html
Signed-off-by: Steven Robertson <steven@strobe.cc>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes out of array read
Fixes: 13262c363a28da8d6bdcc472aed6e9dc/asan_heap-oob_cfb5e2_3733_31cf3fcc783295c34222eb070a784f84.3gp
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Instead of silently ignoring the content_type option in listen mode,
apply its value to the provided "Content-Type:" header.
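For example (a hedged illustration; URL and content type are placeholders), the option can now take effect when serving in listen mode:
ffmpeg -i src.mkv -c copy -listen 1 -content_type video/x-matroska -f matroska http://0.0.0.0:8080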
Signed-off-by: Moritz Barsnick <barsnick@gmx.net>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Instead of silently ignoring the headers option in listen mode, use
the provided headers.
Signed-off-by: Moritz Barsnick <barsnick@gmx.net>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The values don't need to be hardcoded since the correct values are
returned by avs_bits_per_pixel.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Enable the MXF muxer to mux baseline H.264 video streams.
Signed-off-by: Matthias Hunstock <atze@fem.tu-ilmenau.de>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Also set a default_whitelist for mmsh and ffrtmphttp.
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This will be used to allow writing file sequences to multiple places
in parallel using the tee output.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
If negative pts are possible for some codecs in ogg then the code needs to be
changed to use signed values.
Found-by: Thomas Guilbert <tguilbert@google.com>
Fixes: clusterfuzz_usan-2016-08-02
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Set the stream_id to 0xbd (private_stream_id_1). Tools seem to assume
that value, and this is consistent with MPEG TS specification (ITU-T
H.222.0 section 2.12.3).
Rev #2: Fixed doubled header writing; checked that FATE runs without errors
Rev #3: Fixed coding style
This commit addresses the following scenario:
We are using ffmpeg to transcode or remux mkv (or something else) to mkv. The result is streamed on-the-fly to an HTML5 client (streaming starts while ffmpeg is still running). The problem is that the client is unable to detect the duration, because the duration is only written to the mkv at the end of the transcoding/remuxing process. In matroskaenc.c, the duration is only written during mkv_write_trailer but not during mkv_write_header.
The approach:
FFmpeg currently puts quite some effort into estimating the durations of source streams, but in many cases the source stream durations are still left at 0, and these durations are nowhere mapped to or used for output streams. As much as I would have liked to deduce or estimate output durations based on input stream durations, I realized that this is a hard task (as Nicolas already mentioned in a previous conversation). It would involve changes to the duration calculation/estimation/deduction for input streams and propagating these durations to output streams or the output context in a correct way.
So I looked for a simple and small solution with better chances of getting accepted. In webmdashenc.c I found that a duration is written during write_header, taken from the streams' metadata, so I decided on a similar approach.
And here's what it does:
First, it checks the duration of the AVFormatContext. In typical cases this value is not set, but it is set when the user has specified a recording time or an end time via the -t or -to parameters.
Then it looks for a DURATION field in the metadata of the output context (AVFormatContext::metadata). This only exists when the user has explicitly specified a DURATION metadata value on the command line.
Then it iterates over all streams looking for a "DURATION" metadata entry (this works unless the option "-map_metadata -1" has been specified) and determines the maximum value.
The precedence is as follows: 1. Use the duration of the AVFormatContext - 2. Use the explicitly specified metadata duration value - 3. Use the maximum (mapped) metadata duration over all streams.
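A minimal sketch of that precedence (not the actual patch), assuming the DURATION metadata strings can be parsed with av_parse_time(); the helper name guess_header_duration is only illustrative:

    #include <libavformat/avformat.h>
    #include <libavutil/common.h>
    #include <libavutil/parseutils.h>

    static int64_t guess_header_duration(AVFormatContext *s)
    {
        int64_t duration = 0, v;
        AVDictionaryEntry *e;

        /* 1. Duration of the AVFormatContext, e.g. set via -t or -to. */
        if (s->duration > 0)
            return s->duration;

        /* 2. Explicitly specified DURATION metadata on the output context. */
        if ((e = av_dict_get(s->metadata, "DURATION", NULL, 0)) &&
            av_parse_time(&v, e->value, 1) >= 0)
            return v;

        /* 3. Maximum DURATION metadata mapped onto the output streams. */
        for (unsigned i = 0; i < s->nb_streams; i++) {
            e = av_dict_get(s->streams[i]->metadata, "DURATION", NULL, 0);
            if (e && av_parse_time(&v, e->value, 1) >= 0)
                duration = FFMAX(duration, v);
        }
        return duration; /* in AV_TIME_BASE units, 0 if unknown */
    }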
To test this:
1. With explicit recording time:
ffmpeg -i file:"src.mkv" -loglevel debug -t 01:38:36.000 -y "dest.mkv"
2. Take duration from metadata specified via command line parameters:
ffmpeg -i file:"src.mkv" -loglevel debug -map_metadata -1 -metadata Duration="01:14:33.00" -y "dest.mkv"
3. Take duration from mapped input metadata:
ffmpeg -i file:"src.mkv" -loglevel debug -y "dest.mkv"
Regression risk:
Very low IMO, because it only affects the header while ffmpeg is still running. When ffmpeg completes the process, the duration is rewritten to the header with the usual value (the same as without this commit).
Signed-off-by: SoftWorkz <softworkz@hotmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>