ELS and ePIC decoder courtesy of Maxim Poliakovski,
cleanup and integration by Diego Biurrun.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The old one is the result of reverse engineering and guesswork.
The new one has been written following the now-available specification.
This work is part of the Outreach Program for Women Summer 2014
activities for the Libav project.
The fate references had to be changed because the old demuxer truncates
the last frame in some cases, while the new one handles it properly.
The seek-test reference is changed because seeking works differently
in the new demuxer. When seeking, the packet is not read from the stream
directly, but rather constructed by the demuxer. That is why the
position is now -1 in the reference.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Bump the minimum libvpx version to 1.3.0 and rework the configure logic
to fail only if neither decoders nor encoders are found.
Based on the original patch from Vittorio.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The option is enabled by default, but can be disabled.
When the option is enabled, such side data is not copied into the
output stream (except when doing stream copy).
Signed-off-by: Martin Storsjö <martin@martin.st>
Compared to existing, common open-source H264 encoders, this can be
useful since it has a different license (BSD instead of GPL).
Performance- and quality-wise it is comparable to x264 in ultrafast
mode.
Hooking it up as an encoder in libavcodec also simplifies comparing
it against other common encoders.
This requires OpenH264 1.3 or newer. Since the OpenH264 API and ABI
change frequently, only releases are supported.
To take advantage of the OpenH264 patent offer, the OpenH264 library
must not be redistributed, but downloaded at runtime on the end-user's
system.
Signed-off-by: Martin Storsjö <martin@martin.st>
Since this is structurally quite different from normal RTP
(multiple streams are muxed into one single mpegts stream,
which is packetized into one single RTP session), it is kept
as a separate muxer.
Since this also behaves structurally differently from normal
RTP, all of the other muxers that do chained RTP muxing
(rtsp, sap, mp4) would need to be updated similarly to handle
this - in particular, creating one single rtp_mpegts muxer
for the whole presentation instead of one rtp muxer per stream.
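As a sketch of how this single muxer would be driven through the
libavformat API; the address is just an example, and stream setup,
avio_open() and error handling are omitted:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    /* Sketch: one muxing context for the whole presentation; all
     * streams are added to this single context instead of creating
     * one RTP muxer per stream. */
    static AVFormatContext *setup_rtp_mpegts(void)
    {
        AVFormatContext *ctx = avformat_alloc_context();

        if (!ctx)
            return NULL;
        ctx->oformat = av_guess_format("rtp_mpegts", NULL, NULL);
        snprintf(ctx->filename, sizeof(ctx->filename),
                 "rtp://127.0.0.1:5004");
        return ctx;
    }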
Signed-off-by: Martin Storsjö <martin@martin.st>
The packetizer only supports splitting at GOB headers - if
such headers aren't available frequently enough, it splits at an
arbitrary byte offset (not at a macroblock boundary either, which
would be allowed by the spec) and sends a payload header pretending
that it starts with a GOB header.
As long as a receiver doesn't try to handle such cases cleverly
but just drops broken frames, this shouldn't matter too much
in practice.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is mostly meant to serve as a reference example of how to segment
the output from the mp4 muxer; it is capable of writing the segment
list in four different ways:
- SegmentTemplate with SegmentTimeline
- SegmentTemplate with implicit segments
- SegmentList with individual files
- SegmentList with one single file per track, and byte ranges
The muxer is able to serve live content (with optional windowing)
or create a static segmented MPD.
In advanced cases, users will probably want to do the segmenting
in their own application code.
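A rough sketch of driving the muxer through the libavformat API; the
option names used below (window_size, use_template, use_timeline) are
assumptions for illustration, and stream setup, I/O and error handling
are omitted:

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    /* Sketch: select the "dash" muxer and hand it a few options via
     * the dictionary that is later passed to avformat_write_header().
     * Option names are assumptions for this sketch. */
    static AVFormatContext *setup_dash(AVDictionary **opts)
    {
        AVFormatContext *ctx = avformat_alloc_context();

        if (!ctx)
            return NULL;
        ctx->oformat = av_guess_format("dash", NULL, NULL);
        snprintf(ctx->filename, sizeof(ctx->filename), "out.mpd");

        av_dict_set(opts, "window_size",  "5", 0); /* live window      */
        av_dict_set(opts, "use_template", "1", 0); /* SegmentTemplate  */
        av_dict_set(opts, "use_timeline", "1", 0); /* SegmentTimeline  */
        return ctx;
    }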
Signed-off-by: Martin Storsjö <martin@martin.st>
A flag "dash" is added, which enables the necessary flags for
creating DASH compatible fragments.
When this is enabled, one sidx atom is written for each track
before every moof atom.
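A minimal sketch of requesting this from the mp4 muxer, assuming the
flag is exposed through the usual "movflags" option and passed via the
dictionary given to avformat_write_header(); everything else about the
muxer setup is omitted:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    /* Sketch: ask an already set up mp4 muxer context for
     * DASH-compatible fragments. */
    static int write_header_with_dash(AVFormatContext *mp4_ctx)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "movflags", "dash", 0);
        ret = avformat_write_header(mp4_ctx, &opts);
        av_dict_free(&opts);
        return ret;
    }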
Signed-off-by: Martin Storsjö <martin@martin.st>
Convert the Matroska stereo format to the Stereo3D format, and add
Stereo3D side data to the stream.
Bump the supported doctype version.
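A sketch of the kind of mapping this involves, using the libavutil
Stereo3D API; the StereoMode values and the partial mapping below are
illustrative only, not the full table from the patch:

    #include <libavutil/stereo3d.h>

    /* Sketch: map a (simplified) Matroska StereoMode value onto an
     * AVStereo3D descriptor. */
    static int mkv_stereo_to_stereo3d(int stereo_mode, AVStereo3D *s3d)
    {
        switch (stereo_mode) {
        case 0: s3d->type = AV_STEREO3D_2D;         break;
        case 1: s3d->type = AV_STEREO3D_SIDEBYSIDE; break;
        case 3: s3d->type = AV_STEREO3D_TOPBOTTOM;  break;
        default: return -1; /* remaining modes omitted here */
        }
        return 0;
    }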
Bug-Id: 728 / https://bugs.debian.org/757185
It has not been properly maintained for years and there is little hope
of that changing in the future.
It appears simpler to write a new replacement from scratch than to
unbreak it.
Add AV_PKT_DATA_DISPLAYMATRIX and AV_FRAME_DATA_DISPLAYMATRIX as stream and
frame side data (respectively) to describe a display transformation matrix
for linear transformation operations on the decoded video.
Add functions to easily extract a rotation angle from a matrix and,
conversely, to set up a matrix for a given rotation angle.
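Assuming these are the av_display_rotation_set()/av_display_rotation_get()
helpers in libavutil/display.h, a minimal usage sketch:

    #include <stdio.h>
    #include <stdint.h>
    #include <libavutil/display.h>

    int main(void)
    {
        int32_t matrix[9];

        /* Build a display matrix for a 90 degree rotation and read
         * the angle back out of it. */
        av_display_rotation_set(matrix, 90.0);
        printf("rotation: %f degrees\n", av_display_rotation_get(matrix));
        return 0;
    }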
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Initial implementation by Andrew D'Addesio <modchipv12@gmail.com> during
GSoC 2012.
Completion by Anton Khirnov <anton@khirnov.net>, sponsored by the
Mozilla Corporation.
Further contributions by:
Christophe Gisquet <christophe.gisquet@gmail.com>
Janne Grunau <janne-libav@jannau.net>
Luca Barbato <lu_zero@gentoo.org>