When building for UWP (WindowsStore), additional headers are needed and some functions are not available. This also adds AppVeyor CI/CD support to catch these issues in the future.
Fix By: Deal (@halx99) and Brad House (@bradh352)
For historical reasons, we have users depending on ares_set_servers_*() to return ARES_SUCCESS when passed no servers and to actually *clear* the server list. It appears they do this in test cases to simulate DNS being unavailable or similar. Presumably they could achieve the same effect in other ways (such as pointing at localhost on a port that isn't in use), but this usage may be widespread enough to cause headaches, so we will simply document and test for this behavior; clearly it hasn't caused issues for anyone with the old behavior.
See: https://github.com/nodejs/node/pull/50800
Fix By: Brad House (@bradh352)
This PR implements a query cache at the lowest possible level: the actual DNS request and response messages. Only successful and `NXDOMAIN` responses are cached. The lowest TTL in the response message determines the cache validity period for the response and is capped at the configuration value for `qcache_max_ttl`. For `NXDOMAIN` responses, the SOA record is evaluated.
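As a rough sketch of how an application might configure the cap at channel initialization (the `ARES_OPT_QUERY_CACHE` option flag name is an assumption here; `qcache_max_ttl` is the configuration value described above):

```c
#include <ares.h>
#include <string.h>

/* Sketch: initialize a channel with a capped query-cache TTL.
 * ARES_OPT_QUERY_CACHE is assumed to be the option flag that pairs with
 * the qcache_max_ttl field mentioned above. */
int init_channel_with_query_cache(ares_channel *channel)
{
  struct ares_options opts;
  int                 optmask = 0;

  memset(&opts, 0, sizeof(opts));
  opts.qcache_max_ttl  = 300; /* never cache a response longer than 5 minutes */
  optmask             |= ARES_OPT_QUERY_CACHE;

  return ares_init_options(channel, &opts, optmask);
}
```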
For a query to match the cache, the opcode, flags, and each question's class, type, and name are all evaluated. This is to prevent matching a cached entry for a subtly different query (such as if the RD flag is set on one request and not another).
For things like ares_getaddrinfo() or ares_search() that may spawn multiple queries, each individual message received is cached rather than the overarching response. This makes it possible for one query in the sequence to be purged from the cache while others still return cached results, which means there is no chance of ever returning stale data.
We have had many user requests to return TTLs from the various parsers like `ares_parse_caa_reply()`, likely because they want to implement caching mechanisms of their own, so this PR should solve those issues as well.
Due to the internal data structures we have these days, this PR is less than 500 lines of new code.
Fixes #608
Fix By: Brad House (@bradh352)
Some users use blacklist files like https://github.com/StevenBlack/hosts which can contain 200k+ host entries all pointing to 0.0.0.0. Due to the merge logic in the new hosts processor, all of those entries become associated as aliases for the same IP address.
The first issue is that the processor attempted to check the status of all the hosts in the merged entry when it should only have checked the hosts newly added to it, which caused processing time to grow exponentially as the entries got longer.
The next issue is that a lookup for one of those hosts would append every match as a CNAME/alias, but there is zero use for 200k aliases being appended to a single lookup, so we are artificially capping this at 100.
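A tiny sketch of the capping idea (hypothetical helper and types, not c-ares's internal data structures):

```c
#include <stddef.h>

/* Sketch of the alias cap described above: stop collecting aliases for a
 * matched hosts-file entry once a fixed limit (100, per the fix) is hit.
 * The names and types here are hypothetical, not c-ares internals. */
#define MAX_ALIASES 100

static size_t append_aliases(char **dst, size_t ndst,
                             char **src, size_t nsrc)
{
  size_t i;

  for (i = 0; i < nsrc && ndst < MAX_ALIASES; i++) {
    dst[ndst++] = src[i];
  }
  return ndst; /* new alias count, never exceeding MAX_ALIASES */
}
```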
Bug report reference: https://bugs.gentoo.org/917400
Fix By: Brad House (@bradh352)
The retry timeout values were computed with a fixed calculation, which could cause multiple simultaneous queries to time out and retry at exactly the same time. If a DNS server is throttling requests, the problem might never self-resolve because all of the requests recur at the same instant again.
This PR also adds a maximum timeout option to ensure the randomly selected retry timeout never exceeds that value.
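A rough sketch of the idea (not the library's actual calculation): add random jitter to the retry timeout and clamp it to the configured maximum so concurrent queries stop retrying in lockstep.

```c
#include <stdlib.h>

/* Sketch only, not c-ares internals: jitter the retry timeout and clamp
 * it at the configured maximum so simultaneous queries spread out their
 * retries instead of all firing at the same instant. */
static unsigned int retry_timeout_ms(unsigned int base_ms, unsigned int max_ms)
{
  /* Up to 50% random jitter on top of the base timeout. */
  unsigned int jitter  = (unsigned int)rand() % (base_ms / 2 + 1);
  unsigned int timeout = base_ms + jitter;

  return timeout > max_ms ? max_ms : timeout;
}
```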
Fix By: Ignat (@Kontakter)