There is no POSIX error code for EOF - recv() signals EOF by simply
returning 0. But libavformat recently changed its conventions and
requires an explicit AVERROR_EOF, or it might get into an endless retry
loop, consuming 100% CPU while doing nothing.
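
A minimal sketch of the read pattern this describes, using plain POSIX recv(); the DEMO_ names are simplified stand-ins for the libavutil AVERROR macros, and the function is illustrative rather than the actual tcp_read():

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <errno.h>

    #define DEMO_AVERROR(e)  (-(e))           /* simplified stand-in for AVERROR() */
    #define DEMO_AVERROR_EOF (-0x20464F45)    /* same value as lavu's AVERROR_EOF */

    /* Map recv()'s 0-byte return to an explicit EOF code so callers
     * don't retry forever on a closed connection. */
    static int demo_tcp_read(int fd, void *buf, int size)
    {
        ssize_t ret = recv(fd, buf, size, 0);
        if (ret == 0)
            return DEMO_AVERROR_EOF;          /* peer closed: report EOF explicitly */
        if (ret < 0)
            return DEMO_AVERROR(errno);       /* real code uses ff_neterrno() here */
        return (int)ret;
    }
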
This can reduce latency and increase throughput, particularly on high
latency networks.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Reviewed-by: Jeyapal, Karthick <kjeyapal@akamai.com>
ff_accept() can return AVERROR(ETIMEDOUT) while errno is 0 (or
undefined), so return ret instead. Likewise, return ff_neterrno() in
ff_poll_interrupt() instead of AVERROR(errno), so that WSAGetLastError()
is consulted on Windows.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
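
As a hedged illustration of why the wrapper matters (the names below are made up and this is not lavf's actual implementation): on Windows, socket errors are reported through WSAGetLastError() while errno may be 0, so building the error from errno would silently lose the real failure.

    #include <errno.h>
    #ifdef _WIN32
    #include <winsock2.h>
    #endif

    #define DEMO_AVERROR(e) (-(e))   /* simplified stand-in for AVERROR() */

    /* Pick the error source appropriate for the platform. */
    static int demo_neterrno(void)
    {
    #ifdef _WIN32
        return DEMO_AVERROR(WSAGetLastError());  /* errno would be 0/undefined here */
    #else
        return DEMO_AVERROR(errno);
    #endif
    }
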
This commit optimizes HTTP performance by reducing forward seeks, instead
favoring a read-ahead and discard on the current connection (referred to
as a short seek) for seeks that are within a TCP window's worth of data.
This improves performance because with TCP flow control, a window's worth
of data will be in the local socket buffer already or in-flight from the
sender once congestion control on the sender is fully utilizing the window.
Note: this approach doesn't attempt to differentiate between a newly opened
connection, which may not yet be fully utilizing the window due to congestion
control, and one that is. The receiver can't get at this information, so we
assume the worst case: that the full window is in use (we did advertise it,
after all) and that data could be in-flight.
The previous behavior of closing the connection, then opening a new one
with a new HTTP range value, results in massive amounts of discarded
and re-sent data when large TCP windows are used. This has been observed
on MacOS/iOS, which starts with an initial window of 256KB and grows it up to
1MB depending on the bandwidth-delay product.
When we seek forward within a window's worth of data, close the connection,
and then open a new one still within that same window's worth of data, we
discard everything from the current offset to the end of the window. On the
new connection the server then ends up re-sending that data, from the new
offset to the end of the old window.
Example (assumes full window utilization):

TCP window size: 64KB
Position: 32KB
Forward seek position: 40KB

        *                          (Next window)
32KB |--------------| 96KB |---------------| 160KB

        *
      40KB |---------------| 104KB

Re-sent amount: 96KB - 40KB = 56KB
As a real-world test, I have an MP4 file of ~25MB, of which ffplay reads
only ~16MB while performing 177 seeks. With current ffmpeg, this results
in 177 HTTP GETs and ~73MB worth of TCP data communication. With this
patch, ffmpeg issues 4 HTTP GETs and 3 seeks, for a total of ~22MB of TCP data
communication.
To support this feature, the short seek logic in avio_seek() has been
extended to call a function to get the short seek threshold value. This
callback has been plumbed to the URLProtocol structure, which now has
infrastructure in HTTP and TCP to get the underlying receiver window size
via SO_RCVBUF. If the underlying URL and protocol don't support returning
a short seek threshold, the default s->short_seek_threshold is used.
This feature has been tested on Windows 7 and MacOS/iOS. Windows support
is slightly complicated by the fact that when TCP window auto-tuning is
enabled, SO_RCVBUF doesn't report the real window size, but it does if
SO_RCVBUF was manually set (which disables auto-tuning). So we can only use
this optimization on Windows in the latter case.
Signed-off-by: Joel Cunningham <joel.cunningham@me.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
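
For illustration, a minimal sketch of how an underlying TCP layer can report its receive window to the short-seek logic via SO_RCVBUF; plain POSIX, and the helper name is hypothetical rather than lavf's actual callback:

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Report the socket receive buffer size so the seek logic can treat
     * forward seeks within roughly one window of data as "read ahead and
     * discard" instead of reopening the connection. */
    static int demo_get_window_size(int fd)
    {
        int avail = 0;
        socklen_t optlen = sizeof(avail);

        if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &avail, &optlen) < 0)
            return -1;          /* caller falls back to the default threshold */
        return avail;
    }

As noted above, on Windows a value obtained this way is only meaningful when SO_RCVBUF was set manually (auto-tuning disabled), so a real implementation would fall back to the default threshold otherwise.
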
From e24d95c0e06a878d401ee34fd6742fcaddeeb95f Mon Sep 17 00:00:00 2001
From: Joel Cunningham <joel.cunningham@me.com>
Date: Mon, 9 Jan 2017 13:37:51 -0600
Subject: [PATCH] tcp: set socket buffer sizes before listen/connect/accept
Attempting to set SO_RCVBUF and SO_SNDBUF on TCP sockets after connection
establishment is incorrect, and some stacks ignore the set call on the socket
at that point. This has been observed on MacOS/iOS. Windows 7 has some peculiar
behavior where setting SO_RCVBUF afterwards applies only if the buffer is
increasing from the default, while decreases are ignored. This is possibly how
the incorrect usage has gone unnoticed.
Unix Network Programming Vol. 1: The Sockets Networking API (3rd edition, section 7.5):
"When setting the size of the TCP socket receive buffer, the ordering of the
function calls is important. This is because of TCP's window scale option,
which is exchanged with the peer on SYN segments when the connection is
established. For a client, this means the SO_RCVBUF socket option must be
set before calling connect. For a server, this means the socket option must
be set for the listening socket before calling listen. Setting this option
for the connected socket will have no effect whatsoever on the possible window
scale option because accept does not return with the connected socket until
TCP's three-way handshake is complete. This is why the option must be set on
the listening socket. (The sizes of the socket buffers are always inherited from
the listening socket by the newly created connected socket)"
Signed-off-by: Joel Cunningham <joel.cunningham@me.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
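
A minimal POSIX sketch of the ordering the quote prescribes, with illustrative helper names (the actual patch simply moves lavf's existing setsockopt() calls earlier):

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Client side: size the receive buffer before connect() so the window
     * scale negotiated on the SYN can reflect it. */
    static int demo_connect_with_rcvbuf(int fd, const struct sockaddr *addr,
                                        socklen_t addrlen, int rcvbuf)
    {
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)); /* best effort */
        return connect(fd, addr, addrlen);
    }

    /* Server side: set it on the listening socket; accepted sockets inherit
     * the buffer sizes, and setting them after accept() is too late. */
    static int demo_listen_with_rcvbuf(int fd, int backlog, int rcvbuf)
    {
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)); /* best effort */
        return listen(fd, backlog);
    }
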
Fixes: error: dereferencing pointer to incomplete type
Tested-by: Dave Yeo <daveryeo@telus.net>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Apply the default value for timeout in code instead of via the
avoption, to allow distinguishing the default value from the user
not setting anything at all.
Signed-off-by: Martin Storsjö <martin@martin.st>
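
A hedged sketch of the pattern being described, with made-up names and an illustrative default value: the option itself defaults to a sentinel, and only code substitutes the real default, so "user set the default value" and "user set nothing" stay distinguishable.

    #include <stdint.h>

    #define DEMO_DEFAULT_TIMEOUT_US (5 * INT64_C(1000000))  /* illustrative value only */

    typedef struct DemoTCPContext {
        int64_t timeout;    /* option default is -1, i.e. "not set by the user" */
    } DemoTCPContext;

    /* Apply the real default only when the user left the option untouched. */
    static void demo_apply_timeout_default(DemoTCPContext *s)
    {
        if (s->timeout < 0)
            s->timeout = DEMO_DEFAULT_TIMEOUT_US;
    }
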
Instead of a linked list constructed at av_register_all(), store them
in a constant array of pointers.
Since no registration is necessary now, this removes some global state
from lavf. This will also allow the urlprotocol layer caller to limit
the available protocols in a simple and flexible way in the following
commits.
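
A simplified sketch of the data-structure change; the types and names are made up, and lavf's real URLProtocol table is more involved:

    #include <string.h>

    typedef struct DemoProtocol {
        const char *name;
    } DemoProtocol;

    static const DemoProtocol demo_tcp_protocol  = { "tcp" };
    static const DemoProtocol demo_http_protocol = { "http" };

    /* Constant, NULL-terminated table instead of a linked list built at
     * registration time; no global mutable state involved. */
    static const DemoProtocol *const demo_protocols[] = {
        &demo_tcp_protocol,
        &demo_http_protocol,
        NULL,
    };

    static const DemoProtocol *demo_find_protocol(const char *name)
    {
        for (int i = 0; demo_protocols[i]; i++)
            if (!strcmp(demo_protocols[i]->name, name))
                return demo_protocols[i];
        return NULL;
    }
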
This adds two new options that may be set via the dictionary:
- send_buffer_size
- recv_buffer_size
When present, setsockopt() is used with SO_SNDBUF and SO_RCVBUF to set
socket buffer sizes. I chose to make send and receive independent
because buffering requirements are often asymmetric.
Errors in setting the buffer size mean the socket will use its
default, so they are ignored.
There is no sanity checking on values, as the kernel/socket layers
already impose reasonable limits if asked for something crazy.
Rationale for enlarging receive buffers is to reduce susceptibility
to intermittent network delays/congestion. I added setting the send
buffer for symmetry.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
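
A minimal sketch of how such options are typically applied, consistent with the "ignore errors" behaviour described above; the helper name is hypothetical:

    #include <sys/types.h>
    #include <sys/socket.h>

    static void demo_apply_buffer_sizes(int fd, int send_buffer_size,
                                        int recv_buffer_size)
    {
        /* Errors are deliberately ignored: the socket keeps its defaults. */
        if (send_buffer_size > 0)
            setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                       &send_buffer_size, sizeof(send_buffer_size));
        if (recv_buffer_size > 0)
            setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                       &recv_buffer_size, sizeof(recv_buffer_size));
    }

Keeping the two directions independent matches the asymmetric buffering requirements mentioned above.
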
If we try to listen on a TCP port and ff_listen() fails on the
interrupt callback, the socket (bind) descriptor gets overwritten and
is never closed.
As a result, we can't rebind to the same port.
Reviewed-by: Stephan Holljes <klaxa1337@googlemail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
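
A schematic sketch of the idea behind the fix, with hypothetical names: close the descriptor on the failing path instead of leaking it.

    #include <unistd.h>

    /* If the bind/listen step fails (e.g. aborted by the interrupt callback),
     * close the descriptor on the error path so the same port can be bound
     * again later. */
    static int demo_open_listener(int fd, int (*do_listen)(int fd))
    {
        int ret = do_listen(fd);
        if (ret < 0) {
            close(fd);      /* previously the fd was overwritten and leaked */
            return ret;
        }
        return fd;
    }
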
If the remote end of a connection-oriented socket hangs up, generating
an EPIPE error is preferable to an unhandled SIGPIPE signal.
Signed-off-by: Martin Storsjö <martin@martin.st>
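
The commit text doesn't say which mechanism is used; as a hedged illustration, these are the two common ways to get EPIPE instead of SIGPIPE on POSIX-like systems:

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Per-call suppression where MSG_NOSIGNAL exists (Linux and others). */
    static ssize_t demo_send_nosigpipe(int fd, const void *buf, size_t len)
    {
    #ifdef MSG_NOSIGNAL
        return send(fd, buf, len, MSG_NOSIGNAL);
    #else
        return send(fd, buf, len, 0);
    #endif
    }

    /* Per-socket suppression where SO_NOSIGPIPE exists (macOS / BSDs). */
    static void demo_disable_sigpipe(int fd)
    {
    #ifdef SO_NOSIGPIPE
        int one = 1;
        setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &one, sizeof(one));
    #else
        (void)fd;
    #endif
    }
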
This fixes warnings about making integers from pointers without
a cast, and avoids the theoretical case where the lower 32 bits of
the pointer are all zero, in which case the implicit cast wouldn't give
the right result.
Signed-off-by: Martin Storsjö <martin@martin.st>
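
As a generic illustration only (not the actual patched code), the usual way to silence such warnings and avoid truncation is an explicit round-trip through intptr_t:

    #include <stdint.h>

    /* Explicit conversions: no warning, and the full value survives the trip. */
    static void *demo_fd_to_ptr(int fd)
    {
        return (void *)(intptr_t)fd;
    }

    static int demo_ptr_to_fd(void *p)
    {
        return (int)(intptr_t)p;
    }
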
This lowers the level of the warnings printed when trying to connect
to a host name that provides both v6 and v4 addresses but where the
service is only available on the v4 address (as often happens with
'localhost' and servers that aren't v6-aware).
Signed-off-by: Martin Storsjö <martin@martin.st>
This should be closer to how tcp behaved some time ago, and should
fix the issue with idle connections timing out.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Without this patch, a somewhat absent-minded user may not notice that
the connection doesn't work because the port is missing.
Signed-off-by: Martin Storsjö <martin@martin.st>
This gives you the proper v4 or v6 version of the "any address",
allowing receiving connections on any address on the machine.
Signed-off-by: Martin Storsjö <martin@martin.st>
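
One common way to obtain the family-appropriate wildcard address, shown here as a hedged sketch (whether the patch does exactly this is not stated above), is getaddrinfo() with a NULL node and AI_PASSIVE:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <string.h>

    static int demo_resolve_listen_addr(const char *port, struct addrinfo **res)
    {
        struct addrinfo hints;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family   = AF_UNSPEC;       /* v4 or v6, whichever is available */
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_flags    = AI_PASSIVE;      /* NULL node -> wildcard "any" address */

        return getaddrinfo(NULL, port, &hints, res);
    }
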
This header is required for close() on sockets in the network
code. For Winsock, the equivalent function is defined in the
winsock2.h header.
This avoids needing the HAVE_UNISTD_H check in all files dealing with
raw sockets.
Signed-off-by: Martin Storsjö <martin@martin.st>
Also use ff_neterrno() instead of errno directly (which doesn't work
on Windows) for getting the error code.
Signed-off-by: Martin Storsjö <martin@martin.st>
tcp_shutdown() isn't needed at the moment, but is added for
consistency to explain how the function is supposed to be used.
Signed-off-by: Martin Storsjö <martin@martin.st>
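
A hedged sketch of what a tcp_shutdown-style helper typically does, with illustrative flag names rather than lavf's actual flags: map the read/write intent onto shutdown()'s SHUT_* constants so the peer observes the half-close.

    #include <sys/socket.h>

    #define DEMO_SHUT_READ  1    /* illustrative flags, not lavf's */
    #define DEMO_SHUT_WRITE 2

    static int demo_tcp_shutdown(int fd, int flags)
    {
        int how;

        if ((flags & DEMO_SHUT_READ) && (flags & DEMO_SHUT_WRITE))
            how = SHUT_RDWR;
        else if (flags & DEMO_SHUT_WRITE)
            how = SHUT_WR;
        else
            how = SHUT_RD;

        return shutdown(fd, how);
    }
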
This definition is in two files, since the definitions will move
to the private header at the next bump.
Signed-off-by: Martin Storsjö <martin@martin.st>
This simplifies the open functions by avoiding one function
call that needs error checking, reducing the amount of
extra bulk code.
Signed-off-by: Martin Storsjö <martin@martin.st>