Previously, if neither of the checks for the closesocket function
succeeded, we still kept winsock2.h and networking in general
enabled.
When targeting the WinRT API subset, the winsock2.h header is
available (making the check for it succeed, giving the impression
that winsock is available), but tests that actually try to use
such a function will fail. In this case, disable the winsock2.h
feature and networking in general, as if the winsock2.h header
test had failed in the first place.
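For illustration, the two kinds of probes behave differently when
targeting WinRT (a sketch, not the literal configure snippets):

    /* Header-only probe: succeeds, since winsock2.h ships with the
     * WinRT SDK. */
    #include <winsock2.h>
    int main(void) { return 0; }

    /* Function probe: fails, since closesocket() is not part of the
     * WinRT API subset; this is the case that now disables
     * networking. */
    #include <winsock2.h>
    int main(void) { return closesocket(0); }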
Signed-off-by: Martin Storsjö <martin@martin.st>
The new code is faster and reuses the previous state in case of
multiple calls.
The previous code could easily end up in near-infinite loops
if the difference between two clock() calls was never larger
than 1.
This makes fate-parseutils finish in finite time when run in wine
if CryptGenRandom isn't available (as is the case e.g. when
targeting Windows RT/metro).
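A rough sketch of the bounded approach (seed_sketch() is a
hypothetical helper; this is not the actual implementation):

    #include <stdint.h>
    #include <time.h>

    /* Mix successive clock() readings into a running state for a
     * bounded number of iterations and a bounded wall-clock interval,
     * instead of spinning until two readings differ by more than 1
     * (which may never happen under wine). The static state is
     * reused across calls. */
    static uint32_t seed_sketch(void)
    {
        static uint32_t state;
        clock_t start = clock();
        int i;

        for (i = 0; i < 1000000 && clock() - start < CLOCKS_PER_SEC / 10; i++)
            state = state * 1664525u + 1013904223u + (uint32_t)clock();

        return state;
    }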
Patch originally by Michael Niedermayer but with some modifications
by Martin Storsjö.
Signed-off-by: Martin Storsjö <martin@martin.st>
There is no point in delaying the check, and performing it early
avoids bugs with a half-initialized context.
Fixes invalid reads.
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
CC:libav-stable@libav.org
They end up overwriting past the line end.
Partially based on a patch by Michael Niedermayer <michaelni@gmx.at>
Bug-Id: vlc/9700
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The decoder currently sets CODEC_FLAG_EMU_EDGE and relies on
get_buffer2() to always provide buffers with linesize == 2 * width.
This is wrong, since we place no such restriction on get_buffer2()
implementations.
Fix this by decoding into internal buffers and copying them to output
frames. Since this is a very obscure decoder, the performance hit should
not be an issue.
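A minimal sketch of the copy step (internal_buf and frame are
hypothetical names; av_image_copy_plane() is the existing libavutil
helper for this):

    #include "libavutil/imgutils.h"

    /* Decode into a tightly packed internal buffer (stride = 2 *
     * width), then copy into the output frame, whose linesize is
     * whatever get_buffer2() chose. */
    av_image_copy_plane(frame->data[0], frame->linesize[0],
                        internal_buf, 2 * avctx->width,
                        2 * avctx->width, avctx->height);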
When downmixing 2.1 to 2-channel, if the 2.0 portion is Lt/Rt,
sum-difference or dual mono, the actual output will be the same
(with the LFE either mixed in or discarded).
Also, when downmixing an arbitrary layout to 2-channel, if the
bitstream contains custom downmix coefficients targeting Lt/Rt,
the output will be Lt/Rt rather than regular Stereo.
New versions of FreeType have moved the location of their API
header(s) and hide the actual path behind a macro.
Since the location changes between versions and there is no other
way to determine it, this workaround becomes necessary.
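The FreeType-documented include pattern looks like this, where
FT_FREETYPE_H is a macro defined by ft2build.h that expands to the
versioned header path:

    #include <ft2build.h>
    #include FT_FREETYPE_H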
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Specifically, when the corresponding input channel exists and its
matrix column is all-zero (which is necessary for zeroing the
output), that column must be removed from the matrix.
This is not done currently, so the mixing code would end up using
uninitialized pointers from the stack.
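A rough sketch of the pruning (matrix, input_map and the channel
counts are hypothetical names, not the actual resample code):

    /* Drop all-zero columns so the mixer never touches the (unset)
     * input pointer of a fully zeroed channel. */
    int i, j, o = 0;
    for (i = 0; i < in_channels; i++) {
        int nonzero = 0;
        for (j = 0; j < out_channels; j++)
            nonzero |= matrix[j][i] != 0.0;
        if (nonzero) {
            for (j = 0; j < out_channels; j++)
                matrix[j][o] = matrix[j][i];
            input_map[o++] = i;
        }
    }
    in_channels = o;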
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
This should make it easier to catch problems where some of those
pointers are used uninitialized: reading from NULL should always
crash, while random values from the stack can turn out to be valid
pointers, so random memory may be silently overwritten.
If it was set before, we can end up trying to decode a slice
without a valid slice header, which can lead to invalid memory
access.
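A minimal sketch of the fix (slice_header_valid and
parse_slice_header() are hypothetical names): clear the flag before
parsing and only set it once the header has parsed successfully:

    /* Invalidate any earlier header first, so a failed parse cannot
     * leave a stale one marked valid. */
    s->slice_header_valid = 0;
    if (parse_slice_header(s, gb) < 0)
        return AVERROR_INVALIDDATA;
    s->slice_header_valid = 1;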
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
CC:libav-stable@libav.org