Since this is structurally quite different from normal RTP
(multiple streams are muxed into one single mpegts stream,
which is packetized into one single RTP session), it is kept
as a separate muxer.
Since this also behaves structurally differently from normal
RTP, all of the other muxers that do chained RTP muxing
(rtsp, sap, mp4) would need to be updated similarly to handle
this - in particular, creating one single rtp_mpegts muxer
for the whole presentation instead of one rtp muxer per stream.
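For reference, streaming with the new muxer from the command line could
look something like this (the input and codec choices are just illustrative):
  # all streams are muxed into one MPEG-TS stream, sent over a single RTP session
  ffmpeg -re -i input.mp4 -c:v libx264 -c:a aac \
         -f rtp_mpegts rtp://127.0.0.1:5000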
Signed-off-by: Martin Storsjö <martin@martin.st>
The packetizer only supports splitting at GOB headers - if
such headers aren't available frequently enough, it splits at an
arbitrary byte offset (not at a macroblock boundary either, which
would be allowed by the spec) and sends a payload header pretending
that it starts with a GOB header.
As long as a receiver doesn't try to handle such cases cleverly
but just drops broken frames, this shouldn't matter too much
in practice.
Signed-off-by: Martin Storsjö <martin@martin.st>
This was suggested by cbsrobot, ubitux and koda
There are files with huge amounts of XMP data, which would otherwise
be displayed in the terminal output of FFmpeg
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The Extensible Metadata Platform tag can contain various kinds of data
that are not strictly related to the video file, such as the history of
edits and saves from the project file. So display XMP metadata only when
the user explicitly requests it.
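Assuming the demuxer option ends up being named export_xmp (the name is an
assumption here, it isn't spelled out in this message), requesting the data
explicitly could look roughly like:
  # options placed before -i are passed to the demuxer opening that input
  ffmpeg -export_xmp 1 -i input.mov -f null -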
Based on a patch by Marek Fort <marek.fort@chyronhego.com>.
ffmenc will store the recommended encoder configuration if present.
This will allow the user to rely on local defaults and
apply only explicitly set options.
If the recommended encoder configuration is not present, then
the context's non-default options are stored.
Signed-off-by: Lukasz Marek <lukasz.m.luki2@gmail.com>
This is mostly meant to serve as a reference example of how to segment
the output from the mp4 muxer. It is capable of writing the segment
list in four different ways:
- SegmentTemplate with SegmentTimeline
- SegmentTemplate with implicit segments
- SegmentList with individual files
- SegmentList with one single file per track, and byte ranges
The muxer is able to serve live content (with optional windowing)
or create a static segmented MPD.
In advanced cases, users will probably want to do the segmenting
in their own application code.
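As a rough illustration, assuming the muxer exposes use_template,
use_timeline and window_size options, a live run with a windowed
manifest could look like this (codec choices are just an example):
  # keep a sliding window of 5 segments, described via
  # SegmentTemplate with SegmentTimeline
  ffmpeg -re -i input.mkv -c:v libx264 -c:a aac -f dash \
         -use_template 1 -use_timeline 1 -window_size 5 live.mpd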
Signed-off-by: Martin Storsjö <martin@martin.st>
A flag "dash" is added, which enables the necessary flags for
creating DASH compatible fragments.
When this is enabled, one sidx atom is written for each track
before every moof atom.
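For example, enabling the new flag on top of fragmented output could look
like this (frag_keyframe is just one way of triggering fragmentation):
  # write DASH compatible fragments, with a sidx per track before each moof
  ffmpeg -i input.mp4 -c copy -movflags frag_keyframe+dash fragmented.mp4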
Signed-off-by: Martin Storsjö <martin@martin.st>
Previously we wrote decoding timestamps here, while the spec
says these should be presentation timestamps.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is the same logic as is invoked on AVFMT_TS_NEGATIVE,
but it can now be enabled manually, or enabled by
muxers that only need it under certain conditions.
Also allow using the same mechanism to force streams to start
at 0.
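With the option exposed on the muxing context, it can also be set from the
command line; assuming the option and value names as they exist today
(avoid_negative_ts, make_zero), forcing streams to start at 0 looks like:
  # shift timestamps so that the earliest one becomes 0
  ffmpeg -i input.mkv -c copy -avoid_negative_ts make_zero output.mkv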
Signed-off-by: Martin Storsjö <martin@martin.st>
Similarly to the omit_tfhd_offset flag added in e7bf085b, this
avoids writing absolute byte positions to the file, making the
output more easily streamable.
This is a new feature from 14496-12:2012, so application support
isn't necessarily too widespread yet (support for it in libav was
added in 20f95f21f in July 2014).
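Assuming this is the movflag that ended up being named default_base_moof
(the name is an assumption here, it isn't spelled out above), enabling it
together with fragmented output could look like:
  # use offsets relative to the moof atom instead of absolute file positions
  ffmpeg -i input.mp4 -c copy \
         -movflags frag_keyframe+default_base_moof streamable.mp4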
Signed-off-by: Martin Storsjö <martin@martin.st>
The -hls_allow_cache parameter enables explicitly setting the
EXT-X-ALLOW-CACHE tag in the manifest file. That tag indicates
whether the client MAY or MUST NOT cache downloaded media
segments for later replay.
Valid values are 1 (=YES) or 0 (=NO); the EXT-X-ALLOW-CACHE tag
will not appear in the manifest for other values (or if
-hls_allow_cache is not used).
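For example, explicitly marking segments as non-cacheable could be done
with something along these lines (assuming the input codecs are suitable
for MPEG-TS segments):
  # adds #EXT-X-ALLOW-CACHE:NO to the generated playlist
  ffmpeg -i input.mp4 -c copy -f hls -hls_allow_cache 0 out.m3u8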
Signed-off-by: Martin Storsjö <martin@martin.st>
After this, the order from the original file is stored through ReadOrder
when doing ffmpeg -i input.ass -c copy output.mkv.
And now that the ASS muxer honors the ReadOrder, extracting the ASS back
(without transcoding) restores the original order.
The only flag, for now, indicates whether metadata was updated; it is set after each call to
av_read_frame(). This comes with the caveat that, at stream start, it might not be set properly,
as packets might be buffered in AVFormatContext.packet_buffer before being handed to the user
by av_read_frame().
Signed-off-by: Anton Khirnov <anton@khirnov.net>