This avoids unnecessary rebuilds of most source files if only the
list of enabled components has changed, but not the other build
properties set in config.h.
Signed-off-by: Martin Storsjö <martin@martin.st>
Some of these were made possible by moving several common macros to
libavutil/macros.h.
While at it, also improve the other headers a bit.
Reviewed-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
For cases with dual stack (IPv4 + IPv6) connectivity, where one
stack potentially is less reliable, try to connect over both
protocols in parallel, and use whichever address connects first.
In cases with a hostname resolving to multiple IPv4 and IPv6
addresses, the current connection mechanism would try all addresses
in the order returned by getaddrinfo (normally with all IPv6
addresses ordered before the IPv4 addresses). If connection attempts
to the IPv6 addresses returned quickly with an error, this was no
problem, but if they failed only after timing out, the connection
process would have to wait for timeouts on all IPv6 target addresses
before attempting any IPv4 address.
Similar to what RFC 8305 suggests, reorder the list of addresses to
try connecting to, interleaving address families. After starting one
connection attempt, start another one in parallel after a small delay
(200 ms as suggested by the RFC).
For cases with unreliable IPv6 but reliable IPv4, this should make
connection attempts work as reliably as with plain IPv4, with only an
extra 200 ms of connection delay.
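As a rough illustration (a hypothetical helper, not the actual
libavformat code), interleaving the getaddrinfo() result by address
family could look roughly like this; each attempt in the resulting
list would then be started about 200 ms after the previous one,
keeping whichever socket connects first:

    /* Sketch: interleave a getaddrinfo() result list by address family,
     * as RFC 8305 suggests, so an IPv4 attempt follows each IPv6 one. */
    #include <sys/socket.h>
    #include <netdb.h>
    #include <stddef.h>

    static size_t interleave_addrs(const struct addrinfo *ai,
                                   const struct addrinfo **out, size_t max)
    {
        size_t n = 0;
        const struct addrinfo *v6 = ai, *v4 = ai;

        while (n < max && (v6 || v4)) {
            /* Advance each cursor to the next address of its family. */
            while (v6 && v6->ai_family != AF_INET6) v6 = v6->ai_next;
            while (v4 && v4->ai_family != AF_INET)  v4 = v4->ai_next;
            if (v6)            { out[n++] = v6; v6 = v6->ai_next; }
            if (n < max && v4) { out[n++] = v4; v4 = v4->ai_next; }
        }
        return n;
    }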
Signed-off-by: Martin Storsjö <martin@martin.st>
It was sort of optional before - if you didn't call it, networking was
initialized on demand, and an ugly warning was logged. Also, the doxygen
comments threatened that it would be made strictly required one day.
Make it explicitly optional. I would prefer to deprecate it fully, but
there might still be legitimate reasons to use this. But the average
user won't need it.
This is needed only for two things: initializing TLS libraries like
OpenSSL and GnuTLS, and initializing winsock.
OpenSSL and GnuTLS were already silently initialized on demand if the
global network init function was not called. They also have various
thread-safety acrobatics, which make concurrent initialization within
libavformat safe. In addition, the libraries are moving towards making
their global init functions safe, which removes all need for central
global init. In particular, GnuTLS 3.5.16 and OpenSSL 1.1.0g have been
found to have safe init functions. In all cases, they use internal
reference counters to avoid that the global uninit functions interfere
with concurrent uses of the library by other API users who called global
init.
winsock should be thread-safe too, and also maintains an internal
reference counter.
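For illustration, the reference-counting pattern described above
(with made-up names, not any specific library's API) amounts to
something like:

    #include <pthread.h>

    static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
    static int init_refcount;

    void example_global_init(void)
    {
        pthread_mutex_lock(&init_lock);
        if (init_refcount++ == 0) {
            /* perform the real one-time initialization here */
        }
        pthread_mutex_unlock(&init_lock);
    }

    void example_global_uninit(void)
    {
        pthread_mutex_lock(&init_lock);
        if (--init_refcount == 0) {
            /* tear down global state only when the last user is done */
        }
        pthread_mutex_unlock(&init_lock);
    }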
Since we still support ancient TLS libraries, which do not have this
fixed, and since it's unknown whether winsock and GnuTLS
reinitialization is costly in any way, don't deprecate the libavformat
functions yet.
It makes no sense to return an error after the first reconnect, and then
somehow resume the next time it's called. Usually this will lead to
demuxer errors. Make reconnecting block instead, until it has either
successfully reconnected, or given up.
Also make the wait reasonably interruptible. Since there is no mechanism
for this in the API, polling is the best we can do. This behaves roughly
the same as other interruptible network functions in libavformat.
(The original code would work if it returned AVERROR(EAGAIN) or so,
which would make retry_transfer_wrapper() repeat the read call. But I
think having an explicit loop for this is better anyway.)
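As a sketch of such an explicit loop (hypothetical helpers standing
in for the real http.c machinery):

    #include <stdint.h>

    int  try_reconnect(void *ctx);   /* 0 on success, <0 on failure */
    int  is_interrupted(void *ctx);  /* nonzero if the user aborted */
    void sleep_us(int64_t usec);     /* e.g. av_usleep()            */

    static int reconnect_blocking(void *ctx, int max_attempts,
                                  int64_t delay_us)
    {
        for (int attempt = 0; attempt < max_attempts; attempt++) {
            if (try_reconnect(ctx) >= 0)
                return 0;                          /* reconnected */
            /* Wait before the next attempt, polling for interruption. */
            for (int64_t waited = 0; waited < delay_us; waited += 100000) {
                if (is_interrupted(ctx))
                    return -1;                     /* aborted by the caller */
                sleep_us(100000);                  /* sleep in 100 ms slices */
            }
        }
        return -1;                                 /* gave up */
    }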
I also snuck in a fix for reconnect_at_eof. It has to check for
AVERROR_EOF, not 0.
TLS is currently implemented over either OpenSSL or GnuTLS, with more
backends likely to appear in the future. Those backend libraries are
currently part of the protocol names used during e.g. the configure
stage of a build. Hide this behind a generically named declaration for
the TLS protocol, to avoid leaking backend details into the
configuration stage.
ff_accept can return AVERROR(ETIMEDOUT) while errno is 0 (or
undefined); return ret instead. Also return ff_neterror() in
ff_poll_interrupt instead of AVERROR(errno), so that WSAGetLastError
is taken into account on Windows.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
OSX does not know MSG_NOSIGNAL. BSD (which OSX is based on) has the
socket option SO_NOSIGPIPE (even if modern BSDs also support
MSG_NOSIGNAL).
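A minimal sketch of the two mechanisms (illustrative only, not the
actual network code):

    #include <sys/types.h>
    #include <sys/socket.h>

    /* On systems with SO_NOSIGPIPE, set it once per socket. */
    static void set_nosigpipe(int fd)
    {
    #ifdef SO_NOSIGPIPE
        int one = 1;
        setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &one, sizeof(one));
    #endif
    }

    /* Elsewhere, pass MSG_NOSIGNAL on each send() call. */
    static ssize_t send_nosigpipe(int fd, const void *buf, size_t len)
    {
    #ifdef MSG_NOSIGNAL
        return send(fd, buf, len, MSG_NOSIGNAL);
    #else
        return send(fd, buf, len, 0); /* SO_NOSIGPIPE set at creation */
    #endif
    }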
Signed-off-by: Martin Storsjö <martin@martin.st>
OSX does not know MSG_NOSIGNAL, and provides its own non-standard
mechanism instead. I guess Apple hates standards.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Whenever av_gettime() is used to measure a relative period of time,
av_gettime_relative() is preferred, as it guarantees monotonic time
on supported platforms.
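A small example of measuring an elapsed interval with the monotonic
clock:

    #include "libavutil/time.h"

    static int64_t time_operation_us(void (*op)(void))
    {
        int64_t start = av_gettime_relative(); /* monotonic where supported */
        op();
        return av_gettime_relative() - start;  /* immune to wall-clock jumps */
    }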
Signed-off-by: Olivier Langlois <olivier@trillion01.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
This supports non-Linux systems (SOCK_CLOEXEC is non-standard) and
older Linux kernels to the extent possible.
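A sketch of the usual fallback pattern (not the exact code in this
change): ask for SOCK_CLOEXEC atomically where available, otherwise
set FD_CLOEXEC afterwards with fcntl():

    #include <sys/socket.h>
    #include <fcntl.h>
    #include <errno.h>

    static int socket_cloexec(int af, int type, int proto)
    {
        int fd;
    #ifdef SOCK_CLOEXEC
        fd = socket(af, type | SOCK_CLOEXEC, proto);
        if (fd == -1 && errno == EINVAL)  /* older kernel: flag unsupported */
            fd = socket(af, type, proto);
    #else
        fd = socket(af, type, proto);
    #endif
        if (fd != -1)
            fcntl(fd, F_SETFD, FD_CLOEXEC); /* harmless if already set */
        return fd;
    }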
Signed-off-by: Martin Storsjö <martin@martin.st>
This lowers the level of warnings printed when trying to connect
to a host name that provides both v6 and v4 addresses but the
service is only available on the v4 address (often occurring for
'localhost', with servers that aren't v6-aware).
Signed-off-by: Martin Storsjö <martin@martin.st>
HAVE_THREADS is set in config.h if pthreads or w32threads is
available, which is presumably the proper condition here.
Also fixes undefined behaviour in preprocessor directives.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Since the errno.h values don't match the error codes that winsock
returns, map the winsock error codes to the errno ones, to make
sure explicit checks against AVERROR(x) match.
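As an illustration of such a mapping (a sketch, with the numeric
winsock codes spelled out rather than the WSAE* names from
winsock2.h):

    #include <errno.h>
    #include "libavutil/error.h"

    static int map_neterror(int wsa_err)
    {
        switch (wsa_err) {               /* value from WSAGetLastError() */
        case 10035 /* WSAEWOULDBLOCK  */: return AVERROR(EAGAIN);
        case 10036 /* WSAEINPROGRESS  */: return AVERROR(EINPROGRESS);
        case 10048 /* WSAEADDRINUSE   */: return AVERROR(EADDRINUSE);
        case 10054 /* WSAECONNRESET   */: return AVERROR(ECONNRESET);
        case 10060 /* WSAETIMEDOUT    */: return AVERROR(ETIMEDOUT);
        case 10061 /* WSAECONNREFUSED */: return AVERROR(ECONNREFUSED);
        default:                           return AVERROR(EIO);
        }
    }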
Signed-off-by: Martin Storsjö <martin@martin.st>