Since the VDPAU pixel format does not distinguish between different
VDPAU video surface chroma types, we need another way to pass this
data to the application.
Originally, VDPAU in libavcodec only supported decoding to 8-bit YUV
with 4:2:0 chroma sampling. Correspondingly, applications assumed that
libavcodec expected VDP_CHROMA_TYPE_420 video surfaces for output.
However, some of the new HEVC profiles proposed for addition to VDPAU
would require different depth and/or sampling:
http://lists.freedesktop.org/archives/vdpau/2014-July/000167.html
...as would lossless AVC profiles:
http://lists.freedesktop.org/archives/vdpau/2014-November/000241.html
To preserve backward binary compatibility with existing applications,
a new av_vdpau_bind_context() flag is introduced in a further change.
Signed-off-by: Rémi Denis-Courmont <remi@remlab.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This can be used by the application to signal its ability to cope
with video surfaces of types other than 8-bit YUV 4:2:0.
Signed-off-by: Rémi Denis-Courmont <remi@remlab.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
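
As a rough sketch, an application could opt in like this; the flag
name is only assumed here, following the later change that actually
introduces it, and is not defined by this commit:

    #include <libavcodec/avcodec.h>
    #include <libavcodec/vdpau.h>

    /* Illustrative sketch: opt in to surfaces other than 8-bit 4:2:0 via
     * the flag introduced by the later change (name assumed here). */
    static int bind_vdpau(AVCodecContext *avctx, VdpDevice device,
                          VdpGetProcAddress *get_proc_address)
    {
        /* Without the flag, libavcodec keeps assuming the application
         * only allocates VDP_CHROMA_TYPE_420 surfaces. */
        return av_vdpau_bind_context(avctx, device, get_proc_address,
                                     AV_HWACCEL_FLAG_ALLOW_HIGH_DEPTH);
    }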
This carries the pixel format that would be used if it were not for
hardware acceleration. This is equal to AVCodecContext.pix_fmt if
hardware acceleration is not in use.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
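
For illustration, a get_format callback could consult the new field
(assumed below to be AVCodecContext.sw_pix_fmt; the message does not
name it) before accepting a hardware pixel format:

    #include <libavcodec/avcodec.h>

    /* Illustrative callback: only accept VDPAU when the application can
     * allocate surfaces matching the software format (plain 8-bit 4:2:0
     * in this sketch). */
    static enum AVPixelFormat get_format(AVCodecContext *avctx,
                                         const enum AVPixelFormat *fmts)
    {
        const enum AVPixelFormat *p;

        for (p = fmts; *p != AV_PIX_FMT_NONE; p++) {
            if (*p == AV_PIX_FMT_VDPAU &&
                avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P)
                return *p;
        }
        /* Otherwise let libavcodec pick a software format. */
        return avcodec_default_get_format(avctx, fmts);
    }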
This is to avoid proliferation of similar tables in following changes.
Signed-off-by: Rémi Denis-Courmont <remi@remlab.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The MoveFileExA function is available in the headers regardless of
which API subset is targeted, but it is missing from the Windows
Phone link libraries. When targeting Windows Store apps, the function
is available both in the headers and in the link libraries, so there
is no indication to the build system that this function should be
avoided - such an indication is only given by the Windows App
Certification Kit, which forbids using the MoveFileExA function.
Therefore check the WINAPI_FAMILY defines instead, to figure out
which API subset is targeted.
Signed-off-by: Martin Storsjö <martin@martin.st>
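
A rough sketch of that kind of guard; the helper and macro below are
illustrative, not the actual code, which wires the check up through
its own configuration:

    #include <stdio.h>
    #include <windows.h>

    #if defined(WINAPI_FAMILY) && defined(WINAPI_FAMILY_PARTITION)
    #if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)
    #define USE_MOVEFILEEXA 1
    #else
    #define USE_MOVEFILEEXA 0
    #endif
    #else
    /* Old SDKs without API families only target desktop Windows. */
    #define USE_MOVEFILEEXA 1
    #endif

    static int replace_file(const char *src, const char *dst)
    {
    #if USE_MOVEFILEEXA
        return MoveFileExA(src, dst, MOVEFILE_REPLACE_EXISTING) ? 0 : -1;
    #else
        /* Windows Phone/Store: fall back to remove() + rename(), since
         * MoveFileExA cannot be linked (or is forbidden by WACK). */
        remove(dst);
        return rename(src, dst) ? -1 : 0;
    #endif
    }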
Since the mpegts muxer can now handle being called with a NULL
AVIOContext, we don't need to try to allocate one before calling
write_trailer.
Signed-off-by: Martin Storsjö <martin@martin.st>
When opening and closing dynamic buffers as the AVIOContext, we may
not have any AVIOContext available when we want to close and
deallocate the muxer. Allow calling write_trailer despite this.
Signed-off-by: Martin Storsjö <martin@martin.st>
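
A minimal sketch of the resulting pattern, with an illustrative
private context rather than the real mpegts muxer internals:

    #include <stdint.h>
    #include <libavformat/avformat.h>
    #include <libavutil/mem.h>

    typedef struct ExampleMuxContext {
        uint8_t *pending;      /* buffered but not yet written data */
        int      pending_size;
    } ExampleMuxContext;

    static int example_write_trailer(AVFormatContext *s)
    {
        ExampleMuxContext *m = s->priv_data;

        /* Only flush bytes if we actually have somewhere to write them. */
        if (s->pb && m->pending_size)
            avio_write(s->pb, m->pending, m->pending_size);

        /* Freeing internal state must happen even without an AVIOContext. */
        av_freep(&m->pending);
        m->pending_size = 0;
        return 0;
    }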
Fixes invalid writes when there are more blocks in a run than total
remaining blocks.
CC: libav-stable@libav.org
Bug-ID: CVE-2014-8548
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
Signed-off-by: Anton Khirnov <anton@khirnov.net>
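
The kind of check involved looks roughly like this; the variable
names are illustrative, not the actual decoder's:

    #include <libavutil/error.h>
    #include <libavutil/log.h>

    /* Reject a run-length that would write past the remaining blocks. */
    static int check_run(void *log_ctx, int run_length,
                         int blocks_done, int total_blocks)
    {
        if (run_length < 0 || run_length > total_blocks - blocks_done) {
            av_log(log_ctx, AV_LOG_ERROR,
                   "Run of %d blocks but only %d blocks remain\n",
                   run_length, total_blocks - blocks_done);
            return AVERROR_INVALIDDATA;
        }
        return 0;
    }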
Fixes invalid writes with very small image heights.
CC: libav-stable@libav.org
Bug-ID: CVE-2014-8547
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The frame size must be set by the caller and each dimension must be a
multiple of 2.
CC: libav-stable@libav.org
Bug-ID: CVE-2014-8543
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
The frame size must be set by the caller and each dimension must be a
multiple of 8.
CC: libav-stable@libav.org
Bug-ID: CVE-2014-8542
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
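
Both of the preceding two messages describe the same kind of
init-time check; a rough sketch, with the required alignment (2 or 8
respectively) as a parameter:

    #include <libavutil/error.h>
    #include <libavutil/log.h>

    /* Fail early on a missing or misaligned frame size. */
    static int check_frame_size(void *log_ctx, int width, int height,
                                int align)
    {
        if (width <= 0 || height <= 0 || width % align || height % align) {
            av_log(log_ctx, AV_LOG_ERROR,
                   "Invalid frame size %dx%d (must be a positive multiple "
                   "of %d)\n", width, height, align);
            return AVERROR_INVALIDDATA;
        }
        return 0;
    }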
Fixes possible invalid memory access.
Based on code by Michael Niedermayer <michaelni@gmx.at>
CC: libav-stable@libav.org
Bug-ID: CVE-2014-8541
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
This is the order that the caller uses in the rest of the file. The
same operation is applied to both parameters, so this change is only
done for consistency; it doesn't change the actual behaviour.
Bug-Id: CID 732285 / CID 732286
The pts and the corresponding duration are written into sidx atoms,
so make sure they match up correctly.
Signed-off-by: Martin Storsjö <martin@martin.st>
Since this structurally is quite different from normal RTP
(multiple streams are muxed into one single mpegts stream,
which is packetized into one single RTP session), it is kept
as a separate muxer.
Since this structurally also behaves differently from normal
RTP, all of the other muxers that do chained RTP muxing
(rtsp, sap, mp4) would need to be updated similarly to handle
this - in particular, creating one single rtp_mpegts muxer
for the whole presentation instead of one rtp muxer per stream.
Signed-off-by: Martin Storsjö <martin@martin.st>
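
For illustration, an application selects the new muxer by name
through the usual libavformat API; the multicast address is only an
example and error handling is sketched:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    /* Assumes av_register_all() has already been called. */
    static AVFormatContext *open_rtp_mpegts_output(void)
    {
        AVFormatContext *oc = avformat_alloc_context();

        if (!oc)
            return NULL;
        oc->oformat = av_guess_format("rtp_mpegts", NULL, NULL);
        if (!oc->oformat) {
            avformat_free_context(oc);
            return NULL;
        }
        snprintf(oc->filename, sizeof(oc->filename),
                 "rtp://239.255.0.1:5004");
        if (avio_open(&oc->pb, oc->filename, AVIO_FLAG_WRITE) < 0) {
            avformat_free_context(oc);
            return NULL;
        }
        /* Streams still need to be added and avformat_write_header()
         * called before packets can be sent. */
        return oc;
    }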
The packetizer only supports splitting at GOB headers - if those
aren't available frequently enough, it splits at an arbitrary byte
offset (not at a macroblock boundary either, which would be allowed
by the spec) and sends a payload header pretending that it starts
with a GOB header.
As long as a receiver doesn't try to handle such cases cleverly
but just drops broken frames, this shouldn't matter too much
in practice.
Signed-off-by: Martin Storsjö <martin@martin.st>
Instead, explicitly jump to the default case in the cases where
it is wanted, and avoid fallthrough between different codecs,
which could easily introduce bugs if people editing the code
aren't careful.
Signed-off-by: Martin Storsjö <martin@martin.st>
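
A sketch of the resulting control flow; the handler functions are
illustrative stand-ins for the per-codec packetizers:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    static void packetize_h263(void)  { puts("H.263-specific handling"); }
    static void packetize_mpeg4(void) { puts("MPEG-4-specific handling"); }
    static void packetize_raw(void)   { puts("default handling"); }

    /* The jump to the default handling is explicit, rather than relying
     * on fallthrough between unrelated codec cases. */
    static void example_packetize(enum AVCodecID id, int use_special_case)
    {
        switch (id) {
        case AV_CODEC_ID_H263:
            if (!use_special_case)
                goto default_packetize;
            packetize_h263();
            break;
        case AV_CODEC_ID_MPEG4:
            packetize_mpeg4();
            break;
        default:
        default_packetize:
            packetize_raw();
            break;
        }
    }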
ff_mpv_common_init sets s->context_initialized.
This fixes decoding of h261 in the cases where the demuxer
hasn't already set the frame size.
CC: libav-stable@libav.org
Signed-off-by: Martin Storsjö <martin@martin.st>
This avoids trying to do sliced encoding, even if a slice/packet
size is requested (via the -ps option or the rtp_payload_size
field), since the encoder currently doesn't support it (or at least
our decoder can't decode it, even if the h261_encode_gob_header
function is hooked up to be called from the slicing part in
mpegvideo_enc.c).
Signed-off-by: Martin Storsjö <martin@martin.st>
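
A rough sketch of that kind of override, assuming it is applied at
encoder init time; the exact place and wording in the real code may
differ:

    #include <libavcodec/avcodec.h>
    #include <libavutil/log.h>

    /* Ignore a requested slice/packet size for H.261. */
    static void disable_h261_slicing(AVCodecContext *avctx)
    {
        if (avctx->codec_id == AV_CODEC_ID_H261 && avctx->rtp_payload_size) {
            av_log(avctx, AV_LOG_WARNING,
                   "Sliced encoding is not supported for H.261; ignoring "
                   "the requested slice/packet size.\n");
            avctx->rtp_payload_size = 0;
        }
    }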
If we throw away the buffered incomplete frame, make sure to also
throw away the buffered bits of an incomplete byte at the same
time.
CC: libav-stable@libav.org
Signed-off-by: Martin Storsjö <martin@martin.st>
In particular, when packetizing mpegts into rtp, the input packet
timestamp may come from more than one stream, which could cause
multiple packets to be written with the same timestamp.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is the same adjustment that the mp4 muxer does to the start
timestamp of fragments, since the timestamp of a sample in an mp4
file is implicit from the sum of earlier sample durations.
This avoids gaps in the timeline (which can stop dash.js from
playing it back), and makes sure the timestamp on the segmenter
level matches what the mp4 muxer actually writes into the segments.
This is only an issue if the duration of the last AVPacket in a
segment doesn't extend exactly to the actual start timestamp of the
next packet (the first one in the next segment).
Signed-off-by: Martin Storsjö <martin@martin.st>
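
The adjustment amounts to simple arithmetic; a sketch with
illustrative names:

    #include <stdint.h>

    /* The effective start time of a new segment is the end of the
     * previous one, because mp4 sample times are implied by the sum of
     * the earlier sample durations rather than stored explicitly. */
    static int64_t adjust_segment_start(int64_t requested_start,
                                        int64_t prev_segment_start,
                                        int64_t prev_segment_duration)
    {
        int64_t implied_start = prev_segment_start + prev_segment_duration;

        /* If the last packet's duration didn't reach exactly to the next
         * packet's timestamp, use the implied value so the segmenter-level
         * timestamps match what the mp4 muxer actually writes. */
        return implied_start;
    }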