Some http/1.0 implementations, like Python's SimpleHTTPServer, can only handle one client connection at a time. Making a second request while the first is still open leads to a deadlock.
This change enables multiple connections only for http/1.1 servers, which must support keepalive by default and should have no problem with concurrent requests.
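A minimal sketch of the idea, with hypothetical names (not the actual hls demuxer code): a second in-flight connection is only attempted once the server has identified itself as http/1.1 or newer.

    /* Hypothetical sketch: allow a concurrent request only against servers
     * that answered with HTTP/1.1 or newer, since an http/1.0 server may
     * serve just one client at a time. */
    static int may_use_second_connection(int http_major, int http_minor)
    {
        return http_major > 1 || (http_major == 1 && http_minor >= 1);
    }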
Signed-off-by: Aman Gupta <aman@tmm1.net>
Can be used by the API user to figure out which http features the server supports, based on the response received.
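As a rough usage sketch (assuming the exported read-only option is named "http_version", which is an assumption here), an API user could query it on an opened AVIOContext with av_opt_get():

    #include "libavformat/avio.h"
    #include "libavutil/log.h"
    #include "libavutil/mem.h"
    #include "libavutil/opt.h"

    /* Sketch: read the http version reported by the server from an opened
     * AVIOContext; the option name is an assumption, not confirmed API. */
    static void print_http_version(AVIOContext *io)
    {
        uint8_t *ver = NULL;
        if (av_opt_get(io, "http_version", AV_OPT_SEARCH_CHILDREN, &ver) >= 0) {
            av_log(NULL, AV_LOG_INFO, "server spoke http/%s\n", ver);
            av_free(ver);
        }
    }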
Signed-off-by: Aman Gupta <aman@tmm1.net>
This makes do_new_request fail early when dealing with an http/1.0 server, avoiding unnecessary "reconnecting" warnings being shown to the user.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Fixes compile error when building with network or protocols disabled.
This code would never be reached (because the demuxer fails much earlier on http playlists or segments), so it doesn't matter much what we do here as long as it compiles.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Fixes: runtime error: left shift of negative value -180
Fixes: 4626/clusterfuzz-testcase-minimized-5647837887987712
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: runtime error: signed integer overflow: 2147483646 + 33554433 cannot be represented in type 'int'
Fixes: 4563/clusterfuzz-testcase-minimized-5438979567517696
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
x264 now supports multi-bitdepth builds, with a slightly changed API to
request the bit depth during initialization.
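A minimal sketch under the assumption that the installed x264 has the multi-bitdepth API (X264_BUILD >= 153): the bit depth is requested per encoder instance through x264_param_t instead of being fixed at build time.

    #include <stddef.h>
    #include <x264.h>

    /* Sketch only: ask a multi-bitdepth x264 build for 10-bit encoding at
     * runtime; older single-depth builds had this as a compile-time property. */
    static x264_t *open_10bit_encoder(int width, int height)
    {
        x264_param_t p;
        if (x264_param_default_preset(&p, "medium", NULL) < 0)
            return NULL;
    #if X264_BUILD >= 153
        p.i_bitdepth = 10;                      /* the new per-instance field */
    #endif
        p.i_width  = width;
        p.i_height = height;
        p.i_csp    = X264_CSP_I420 | X264_CSP_HIGH_DEPTH;  /* 10-bit planes */
        return x264_encoder_open(&p);
    }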
Reviewed-by: Ricardo Constantino <wiiaboo@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This provides a generic way for the API user to deal with files that
either lack this SEI, or which have the SEI only in packets not passed
to the decoder (such as the common case of the SEI being in the very
first video packet, but decoding being started somewhere in the middle of
the file). Bugs like 840b41b2a6 make this
somewhat of a necessity.
This intentionally uses the version in the SEI instead, if any is found.
This is just a lot of complicated and confusing code that had no purpose
anymore.
Also, the functions' return values were checked only sometimes. Locking
shouldn't fail anyway, so remove the return values. Barely any other
pthread lock call checks the return value (including more important code
that is more likely to fail horribly if locking fails).
It could be argued that it might be helpful in some debugging
situations, or when the user built FFmpeg without thread support against
all good advice.
But there are dummy atomics too, so the atomic check won't absolutely
ensure correctness either. You gain very little.
Also, for debugging, you can just raise the ASSERT_LEVEL, and then
libavutil/thread.h will redefine the locking functions to explicitly
check the return values.
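Roughly the pattern meant here, sketched with a hypothetical wrapper (the actual macros live in libavutil/thread.h):

    #include <pthread.h>
    #include "libavutil/avassert.h"

    /* Sketch of the strict variant enabled at a high ASSERT_LEVEL: the
     * return value of the lock call is asserted on instead of ignored. */
    static inline void checked_mutex_lock(pthread_mutex_t *m)
    {
        int ret = pthread_mutex_lock(m);
        av_assert0(ret == 0);
    }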
It's completely absurd that libavcodec would care about libavformat
locking, but it was there because the lock manager was in libavcodec.
This is more straightforward. It changes the ABI, but we don't require ABI
compatibility currently.
Use static mutexes instead of requiring a lock manager. The behavior
should be roughly the same before and after this change for API users
that did not set a lock manager at all (except that a minor memory
leak disappears).
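As a minimal illustration of why static mutexes remove the need for a lock manager (a sketch, not the actual libavcodec code): a statically initialized mutex needs no registration or teardown by the API user.

    #include <pthread.h>

    /* Sketch: a statically initialized mutex can guard one-time global
     * init without any user-supplied lock manager callback. */
    static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int initialized;

    static void global_init_once(void (*do_init)(void))
    {
        pthread_mutex_lock(&init_mutex);
        if (!initialized) {
            do_init();
            initialized = 1;
        }
        pthread_mutex_unlock(&init_mutex);
    }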
This removes the XP compatibility code, and switches entirely to SRW
locks, which are available starting at Windows Vista.
This removes CRITICAL_SECTION use, which allows us to add
PTHREAD_MUTEX_INITIALIZER, which will be useful later.
Windows XP is hereby not a supported build target anymore. It was
decided in a project vote that this is OK.
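For illustration (a sketch, not the actual compat/w32pthreads code): unlike CRITICAL_SECTION, an SRW lock has a static initializer, which is what makes a PTHREAD_MUTEX_INITIALIZER equivalent possible on Windows.

    #include <windows.h>

    /* Sketch: SRW locks (Vista and later) can be statically initialized,
     * whereas a CRITICAL_SECTION needs InitializeCriticalSection() at
     * runtime before first use. */
    static SRWLOCK demo_lock = SRWLOCK_INIT;

    static void demo_locked(void (*work)(void))
    {
        AcquireSRWLockExclusive(&demo_lock);
        work();
        ReleaseSRWLockExclusive(&demo_lock);
    }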
Currently the http end of chunk is signalled implicitly in hlsenc_io_open().
This means playlist http writes would have to wait up to a segment duration to signal end of chunk, causing delays.
This patch fixes that problem and improves performance.
AVX-512 consists of a plethora of different extensions, but in order to keep
things a bit more manageable we group together the following extensions
under a single baseline cpu flag which should cover SKL-X and future CPUs:
* AVX-512 Foundation (F)
* AVX-512 Conflict Detection Instructions (CD)
* AVX-512 Byte and Word Instructions (BW)
* AVX-512 Doubleword and Quadword Instructions (DQ)
* AVX-512 Vector Length Extensions (VL)
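In runtime-dispatch terms (a sketch; the combined flag name is AV_CPU_FLAG_AVX512, added for this baseline group), an AVX-512 code path is selected only when the whole group is present:

    #include "libavutil/cpu.h"

    /* Sketch: the single combined flag covers the F/CD/BW/DQ/VL baseline,
     * so one check is enough to pick the AVX-512 code path. */
    static int have_avx512(void)
    {
        return !!(av_get_cpu_flags() & AV_CPU_FLAG_AVX512);
    }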
On x86-64 AVX-512 provides 16 additional vector registers, prefer using
those over existing ones since it allows us to avoid using `vzeroupper`
unless more than 16 vector registers are required. They also happen to
be volatile on Windows, which means that we don't need to save and restore
existing xmm register contents unless more than 22 vector registers are
required.
Big thanks to Intel for their support.
Filters that set the sample aspect ratio should not need to set it
explicitly on each and every frame; instead, that is done automatically
in the get_buffer call.
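Conceptually (a sketch with a hypothetical helper, not the actual libavfilter code), the frame handed out by the get_buffer path already carries the link's sample aspect ratio:

    #include "libavfilter/avfilter.h"
    #include "libavutil/frame.h"

    /* Sketch: what the get_buffer path now does on behalf of every filter,
     * so filters no longer copy the SAR onto each frame themselves. */
    static void propagate_sar(const AVFilterLink *link, AVFrame *frame)
    {
        frame->sample_aspect_ratio = link->sample_aspect_ratio;
    }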
Signed-off-by: Paul B Mahol <onemda@gmail.com>
AVERROR_EOF is an internal error which means the http socket is no longer
valid for new requests. It informs the caller that a new connection must
be established, and as such does not need to be surfaced to the user as
a warning.
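A rough sketch of the intended call-site behavior (hypothetical helper, not the exact demuxer code): AVERROR_EOF just triggers a reconnect, while other errors still warn.

    #include "libavutil/error.h"
    #include "libavutil/log.h"

    /* Sketch: only unexpected errors from reusing a keepalive connection
     * are surfaced as a warning; AVERROR_EOF silently means "reconnect". */
    static void report_reuse_result(void *log_ctx, int ret)
    {
        if (ret < 0 && ret != AVERROR_EOF)
            av_log(log_ctx, AV_LOG_WARNING,
                   "keepalive request failed, reconnecting\n");
    }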
Signed-off-by: Aman Gupta <aman@tmm1.net>
This fixes a deadlock when using the hls demuxer's new http_persistent feature
to stream a youtube live stream over HTTPS. The youtube servers are http/1.1
compliant, but return a "Connection: close". Before this commit, the demuxer
would attempt to send a new request on the partially shutdown connection and
cause a deadlock in the tls protocol.
Signed-off-by: Aman Gupta <aman@tmm1.net>
This improves network throughput of the hls demuxer by avoiding
the latency introduced by downloading segments one at a time.
The problem is particularly noticeable over high-latency network
connections: for instance, if the RTT is 250ms, there will be a 250ms idle
period between when one segment's response is fully read and when the next
one starts.
The obvious solution to this is to use HTTP pipelining, where a
second request can be sent (on the persistent http/1.1 connection)
before the first response is fully read. Unfortunately the way the
http protocol is implemented in avformat makes implementing pipelining
very complex.
Instead, this commit simulates pipelining using two separate persistent
http connections. This has the advantage of working independently of
the http_persistent option, and can be used with http/1.0 servers as
well. The pair of connections is swapped every time a new segment starts
downloading, and a request for the next segment is sent on the secondary
connection right away. This means the second response will be ready and
waiting by the time the current response is fully read.
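A condensed sketch of the swapping scheme described above (hypothetical names; the real logic lives in the hls demuxer): two persistent connections alternate between "response currently being read" and "next request already sent".

    /* Sketch of the simulated pipelining: when segment N starts downloading
     * on one connection, the request for segment N+1 is issued on the other,
     * so its response is already waiting once N has been fully read. */
    typedef struct SegmentConn {
        void *io;                      /* stand-in for one http connection */
    } SegmentConn;

    static void start_next_segment(SegmentConn **cur, SegmentConn **next,
                                   void (*send_request)(SegmentConn *c, int seg),
                                   int next_seg)
    {
        SegmentConn *tmp = *cur;       /* swap the pair of connections */
        *cur  = *next;                 /* the pre-requested one becomes current */
        *next = tmp;
        send_request(*next, next_seg); /* pre-request the following segment */
    }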
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi>