The previous implementation would redirect a query to a failed server
based on a timeout and random chance per query. This could introduce
latency, since the redirected query could land on a server that was not
yet back online and would have to wait for it to time out. Instead, we
now continue to use the known good servers for the query itself, but
spawn a second query with the same question to a downed server. That
query is processed in the background and can potentially bring the
server back online.
Also, when using the `rotate` option, servers were previously chosen at
random from the complete list. This PR changes that to choose only from
the servers that share the same highest priority.
Authored-By: Brad House (@bradh352)
As per #852, searching is failing. This is partially due to the ndots
value not defaulting to a proper value on Linux, and partially due to
systemd-resolved returning the wrong error codes.
This PR fixes the first issue and adds containerized test cases to
validate the behavior and prevent issues in the future.
Reported-By: Hans-Christian Egtvedt (@egtvedt) and Mikael Lindemann (@mikaellindemann)
Authored-By: Brad House (@bradh352)
If a blank DNS name is used, the DNS query cache would fail due to an
invalid sanity check. This can be legitimate such as:
adig -t SOA .
This fixes that situation as well as a few other spots that were
uncovered and adds a test case to validate the behavior to ensure
it won't regress in the future.
Fixes #858
Reported-By: Nodar Chkuaselidze (@nodech)
Authored-By: Brad House (@bradh352)
On CI systems that can be overloaded, things like usleep() and select()
may not closely honor their timeouts, and can often be a multiple of the
requested timeout. Some tests, out of necessity need to rely on accurate
timing in order to test timeout conditions so this means test failures
when the skew is too large. Short of increasing timeouts to a point that
would make tests take an unreasonable amount of time, the alternative is
to make the OS honor the requested timeout more accurately.
On MacOS this means setting a realtime thread priority for the tests.
Other projects such as libuv do the same thing. The code is taken from:
https://developer.apple.com/library/archive/technotes/tn2169/_index.html
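For reference, a minimal sketch of the technique (based on the Apple
technote above, not the exact code used by the tests; the 5/3/10 ms
budget values are purely illustrative):

```c
#include <mach/mach.h>
#include <mach/mach_time.h>
#include <pthread.h>
#include <stdint.h>

/* Ask the scheduler for a time-constraint (realtime) policy so that short
 * sleeps and poll timeouts are honored more accurately. */
static int set_realtime_priority(void)
{
  mach_timebase_info_data_t            timebase;
  thread_time_constraint_policy_data_t policy;
  thread_port_t thread = pthread_mach_thread_np(pthread_self());

  mach_timebase_info(&timebase);

  /* Convert 5ms/3ms/10ms from nanoseconds into mach absolute time units */
  policy.period      = (uint32_t)(5000000ULL  * timebase.denom / timebase.numer);
  policy.computation = (uint32_t)(3000000ULL  * timebase.denom / timebase.numer);
  policy.constraint  = (uint32_t)(10000000ULL * timebase.denom / timebase.numer);
  policy.preemptible = FALSE;

  return thread_policy_set(thread, THREAD_TIME_CONSTRAINT_POLICY,
                           (thread_policy_t)&policy,
                           THREAD_TIME_CONSTRAINT_POLICY_COUNT) == KERN_SUCCESS;
}
```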
Authored-By: Brad House (@bradh352)
TCP Fast Open (TFO) allows TCP connection establishment in 0-RTT when a
client and server have previously communicated. The SYN packet will also
contain the initial data packet from the client to the server. This
means there should be virtually no slowdown compared to UDP when both
sides support TCP FastOpen, which is unfortunately not always the case.
For instance, `1.1.1.1` appears to support TFO, while `8.8.8.8` does not.
This implementation supports Linux, Android, FreeBSD, MacOS, and iOS.
While Windows does have support for TCP FastOpen, it does so via
completion APIs only, and those can't be used with the polling APIs used
on every other OS. We could implement it in the future if desired for
those using `ARES_OPT_EVENT_THREAD`, but it would probably require
adopting IOCP completely on Windows.
Sysctls are required to be set appropriately:
- Linux: `net.ipv4.tcp_fastopen`:
- `1` = client only (typically default)
- `2` = server only
- `3` = client and server
- MacOS: `net.inet.tcp.fastopen`
- `1` = client only
- `2` = server only
- `3` = client and server (typically default)
- FreeBSD: `net.inet.tcp.fastopen.server_enable` (boolean) and
`net.inet.tcp.fastopen.client_enable` (boolean)
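For illustration only (this is not the c-ares code; TFO use inside
c-ares is automatic), a rough sketch of a 0-RTT client connection on
Linux, assuming headers that define `TCP_FASTOPEN_CONNECT` (4.11+):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stddef.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send the first payload in the SYN via TCP_FASTOPEN_CONNECT.  If the
 * kernel has no cached TFO cookie for this peer yet, the connection
 * silently falls back to a regular three-way handshake. */
static int tfo_connect_send(const struct sockaddr_in *addr,
                            const void *data, size_t len)
{
  int on = 1;
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0)
    return -1;

  if (setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN_CONNECT, &on, sizeof(on)) < 0 ||
      connect(fd, (const struct sockaddr *)addr, sizeof(*addr)) < 0 ||
      send(fd, data, len, 0) < 0) {
    close(fd);
    return -1;
  }
  return fd;
}
```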
This feature is always on when running on an OS with the capability
enabled. Though some middleboxes have interfered with end-to-end TFO and
caused connectivity errors, all modern OSs perform automatic blackholing
of IPs that have issues with TFO, so this is not expected to cause any
issues with modern implementations.
This will also help with improving latency for future DoT and DoH
implementations.
Authored-By: Brad House (@bradh352)
During the implementation of server cookies, test cases were missing to
validate that the server cookie from a prior reply was passed back, and
it turns out it was not.
This fixes that behavior and also adds tests for verification.
Fix By: Brad House (@bradh352)
DNS cookies are a simple form of learned mutual authentication supported
by most DNS server implementations these days and can help prevent DNS
Cache Poisoning attacks for clients and DNS amplification attacks for
servers.
Fixes #620
Fix By: Brad House (@bradh352)
In c-ares 1.30.0 we started validating that parsed strings are
printable. This caused a regression in a pycares test case: the wrong
response code was returned because the error was propagated from a
different section of code that assumed the only possible failure
condition was out-of-memory.
This PR adds a fix for this and also a test case to validate it.
Ref: https://github.com/saghul/pycares/issues/200
Fix By: Brad House (@bradh352)
This PR enables DNS 0x20 as per
https://datatracker.ietf.org/doc/html/draft-vixie-dnsext-dns0x20-00 .
DNS 0x20 adds additional entropy to the request by randomly altering the
case of the DNS question to help prevent cache poisoning attacks.
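A simplified sketch of the idea (not the c-ares internals; `rand()`
stands in here for a proper random source):

```c
#include <ctype.h>
#include <stdlib.h>

/* Randomize the case of each ASCII letter in a DNS name (DNS 0x20).  The
 * response must echo the exact mixed-case name, which adds entropy that a
 * blind spoofer would have to guess. */
static void dns0x20_randomize(char *name)
{
  size_t i;
  for (i = 0; name[i] != '\0'; i++) {
    if (!isalpha((unsigned char)name[i]))
      continue;
    if (rand() & 1)
      name[i] = (char)toupper((unsigned char)name[i]);
    else
      name[i] = (char)tolower((unsigned char)name[i]);
  }
}
```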
Google DNS has implemented this support as of 2023, even though this is
a proposed and expired standard from 2008:
https://groups.google.com/g/public-dns-discuss/c/KxIDPOydA5M
There have been documented cases of name server and caching server
non-conformance, though it is expected to become more rare, especially
since Google has started using this.
This can be enabled via the `ARES_FLAG_DNS0x20` flag, which is currently
disabled by default. The test cases do however enable this flag to
validate this feature.
Implementors using this flag will notice that responses will retain the
mixed case, but since DNS names are case-insensitive, any proper
implementation should not be impacted.
There is currently no fallback mechanism implemented as it isn't
immediately clear how this may affect a stub resolver like c-ares where
we aren't querying the authoritative name server, but instead an
intermediate recursive resolver where some domains may return invalid
results while others return valid results, all while querying the same
nameserver. DNS cookies, as suggested in #620, are likely a better
mechanism to fight cache poisoning attacks for stub resolvers.
TCP queries do not use this feature even if the `ARES_FLAG_DNS0x20` flag
is specified since they are not subject to cache poisoning attacks.
Fixes Issue: #795
Fix By: Brad House (@bradh352)
MSVC has been building with /W3, which isn't considered a safe level for
modern code. /W4 is recommended, but it too is lacking some recommended
options, so we enable /W4 plus the additional recommended options. We
do, however, have to disable a couple of warnings because Windows
headers are not always fully compliant, and because of a few things we
do in c-ares that MSVC doesn't like but that aren't actually bad.
Fix By: Brad House (@bradh352)
**Summary**
This PR adds a server state callback that is invoked whenever a query to
a DNS server finishes.
The callback is invoked with the server details (as a string), a boolean
indicating whether the query succeeded or failed, flags describing the
query (currently just indicating whether TCP or UDP was used), and
custom userdata.
This can be used by user applications to gain observability into DNS
server health and usage. For example, alerts when a DNS server
fails/recovers or metrics to track how often a DNS server is used and
responds successfully.
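A minimal usage sketch; the function and flag names below
(`ares_set_server_state_callback()`, `ARES_SERV_STATE_TCP`) are from the
API added by this PR, but treat the exact signature as illustrative:

```c
#include <ares.h>
#include <stdio.h>

/* Called after each query to a server completes, reporting whether that
 * server answered successfully and whether UDP or TCP was used. */
static void serv_state_cb(const char *server_string, ares_bool_t success,
                          int flags, void *data)
{
  (void)data;
  printf("server %s %s (%s)\n", server_string,
         success ? "succeeded" : "failed",
         (flags & ARES_SERV_STATE_TCP) ? "TCP" : "UDP");
}

static void setup_observability(ares_channel_t *channel)
{
  ares_set_server_state_callback(channel, serv_state_cb, NULL);
}
```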
**Testing**
Three new regression tests `MockChannelTest.ServStateCallback*` have
been added to test the new callback in different success/failure
scenarios.
Fix By: Oliver Welsh (@oliverwelsh)
Improve reliability in the server retry delay regression tests by
increasing the retry delay and sleeping for a little more than the retry
delay when attempting to force retries.
This helps to account for unreliable timing (e.g. NTP slew)
intermittently breaking pipelines.
Fix By: Oliver Welsh (@oliverwelsh)
**Summary**
By default c-ares will select the server with the fewest consecutive
failures when sending a query. However, this means that if a server
temporarily goes down and hits failures (e.g. due to a transient network
issue), then that server will never be retried until all other servers
hit the same number of failures.
This is an issue if the failed server is preferred to other servers in
the list. For example if a primary server and a backup server are
configured.
This PR adds new server failover retry behavior, where failed servers
are retried with small probability after a minimum delay has passed. The
probability and minimum delay are configurable via the
`ARES_OPT_SERVER_FAILOVER` option. By default c-ares will use a
probability of 10% and a minimum delay of 5 seconds.
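A configuration sketch; the `server_failover_opt` field names below are
assumed to match the shipped `ares_options` layout, and the values shown
are just the documented defaults written out explicitly:

```c
#include <ares.h>
#include <string.h>

/* retry_chance is the 1/N probability of retrying a failed server
 * (10 => 10%); retry_delay is the minimum delay in milliseconds before a
 * failed server may be retried. */
static int init_with_failover(ares_channel_t **channel)
{
  struct ares_options opts;
  memset(&opts, 0, sizeof(opts));
  opts.server_failover_opt.retry_chance = 10;   /* 1 in 10 */
  opts.server_failover_opt.retry_delay  = 5000; /* 5 seconds */
  return ares_init_options(channel, &opts, ARES_OPT_SERVER_FAILOVER);
}
```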
In addition, this PR includes a small change to always close out
connections to servers which have hit failures, even with
`ARES_FLAG_STAYOPEN`. It's possible that resetting the connection can
resolve some server issues (e.g. by resetting the source port).
**Testing**
A new set of regression tests have been added to test the new server
failover retry behavior.
Fixes Issue: #717
Fix By: Oliver Welsh (@oliverwelsh)
This PR adds a new function `ares_search_dnsrec()` to search for records
using the new DNS record parser.
The function takes an arbitrary DNS record object to search (which must
represent a query for a single name), along with a new callback type,
`ares_callback_dnsrec`, that is invoked with a parsed DNS record object
rather than the raw buffer(+length).
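A rough usage sketch; the DNS record helpers used here
(`ares_dns_record_create()`, `ares_dns_record_query_add()`) are from the
public record API, but treat the exact signatures as approximate:

```c
#include <ares.h>
#include <stdio.h>

/* New-style callback: receives a parsed DNS record instead of raw bytes. */
static void search_cb(void *arg, ares_status_t status, size_t timeouts,
                      const ares_dns_record_t *dnsrec)
{
  (void)arg; (void)timeouts;
  if (status == ARES_SUCCESS && dnsrec != NULL)
    printf("answers: %u\n",
           (unsigned)ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER));
}

static void do_search(ares_channel_t *channel)
{
  ares_dns_record_t *dnsrec = NULL;

  /* Build a query for a single name; an OPT RR with custom options could
   * also be added to the additional section before searching. */
  if (ares_dns_record_create(&dnsrec, 0, ARES_FLAG_RD, ARES_OPCODE_QUERY,
                             ARES_RCODE_NOERROR) != ARES_SUCCESS)
    return;
  if (ares_dns_record_query_add(dnsrec, "example.com", ARES_REC_TYPE_NAPTR,
                                ARES_CLASS_IN) == ARES_SUCCESS)
    ares_search_dnsrec(channel, dnsrec, search_cb, NULL);

  /* Assumed: c-ares duplicates the record internally, so free our copy. */
  ares_dns_record_destroy(dnsrec);
}
```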
The original motivation for this change is to provide support for
[draft-kaplan-enum-sip-routing-04](https://datatracker.ietf.org/doc/html/draft-kaplan-enum-sip-routing-04);
when routing phone calls using an ENUM server, it can be useful to
include identifying source information in an OPT RR options value, to
help select the appropriate route for the call. The new function allows
for more customisable searches like this.
**Summary of code changes**
A new function `ares_search_dnsrec()` has been added and exposed.
Moreover, the entire `ares_search_int()` internal code flow has been
refactored to use parsed DNS record objects and the new DNS record
parser. The DNS record object is passed through the `search_query`
structure by encoding/decoding to/from a buffer (if multiple search
domains are used). A helper function `ares_dns_write_query_altname()` is
used to re-write the DNS record object with a new query name (used to
append search domains).
`ares_search()` is now a wrapper around the new internal code, where the
DNS record object is created based on the name, class and type
parameters.
The new function uses a new callback type, `ares_callback_dnsrec`. This
is invoked with a parsed DNS record object. For now, we convert from
`ares_callback` to this new type using `ares__dnsrec_convert_cb()`.
Some functions that are common to both `ares_query()` and
`ares_search()` have been refactored using the new DNS record parser.
See `ares_dns_record_create_query()` and
`ares_dns_query_reply_tostatus()`.
**Testing**
A new FV has been added to test the new function, which searches for a
DNS record containing an OPT RR with custom options value.
As part of this, I needed to enhance the mock DNS server to expect
request text (and assert that it matches actual request text). This is
because the FV needs to check that the request contains the correct OPT
RR.
**Documentation**
The man page docs have been updated to describe the new feature.
**Futures**
In the future, a new variant of `ares_send()` could be introduced in the
same vein (`ares_send_dnsrec()`). This could be used by
`ares_search_dnsrec()`. Moreover, we could migrate internal code to use
`ares_callback_dnsrec` as the default callback.
This will help to make the new DNS record parser the norm in c-ares.
Co-authored-by: Oliver Welsh (@oliverwelsh)
A regression was introduced in 1.26.0: building c-ares with threading disabled results in ares_init{_options}() failing.
Also adds a new CI test case to prevent this regression in the future.
Fixes Bug: #699
Fix By: Brad House (@bradh352)
This PR implements an event thread to process all events on file descriptors registered by c-ares. Prior to this feature, integrators were required to understand the internals of c-ares and how to monitor file descriptors and timeouts and process events.
Implements OS-specific efficient polling such as epoll(), kqueue(), or IOCP, and falls back to poll() or select() if otherwise unsupported. At this point, it depends on basic threading primitives such as pthreads or Windows threads.
If enabled via the ARES_OPT_EVENT_THREAD option passed to ares_init_options(), then socket callbacks cannot be used.
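Enabling it looks roughly like the following sketch; `evsys`,
`ARES_EVSYS_DEFAULT`, and `ares_threadsafety()` are from the public API,
with the latter confirming the library was built with threading support:

```c
#include <ares.h>
#include <string.h>

/* Spin up a channel that uses the built-in event thread so the
 * application never has to poll c-ares file descriptors itself. */
static int init_with_event_thread(ares_channel_t **channel)
{
  struct ares_options opts;

  if (ares_library_init(ARES_LIB_INIT_ALL) != ARES_SUCCESS ||
      !ares_threadsafety())
    return -1;

  memset(&opts, 0, sizeof(opts));
  opts.evsys = ARES_EVSYS_DEFAULT; /* picks epoll/kqueue/IOCP/poll/select */

  return ares_init_options(channel, &opts, ARES_OPT_EVENT_THREAD);
}
```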
Fixes Bug: #611
Fix By: Brad House (@bradh352)
New test cases depend on internal symbols for calculating timeouts.
Disable those test features if symbol hiding is enabled.
Fixes Bug: #664
Fix By: Brad House (@bradh352)
When doing ares_gethostbyname() or ares_getaddrinfo() with AF_UNSPEC, if ares_cancel() was called after one address class was returned but before the other address class, it would return ARES_SUCCESS rather than ARES_ECANCELLED.
Test case has been added for this specific condition.
Fixes Bug: #662
Fix By: Brad House (@bradh352)
As of c-ares 1.22.0, server timeouts were erroneously not incrementing server failures, meaning the server in use wouldn't rotate. There was apparently never a test case for this condition.
This PR fixes the bug and adds a test case to ensure it behaves properly.
Fixes Bug: #650
Fix By: Brad House (@bradh352)
This PR implements a query cache at the lowest possible level: the actual DNS request and response messages. Only successful and `NXDOMAIN` responses are cached. The lowest TTL in the response message determines the cache validity period for the response, capped at the configuration value for `qcache_max_ttl`. For `NXDOMAIN` responses, the SOA record is evaluated.
For a query to match the cache, the opcode, flags, and each question's class, type, and name are all evaluated. This is to prevent matching a cached entry for a subtly different query (such as if the RD flag is set on one request and not another).
For things like ares_getaddrinfo() or ares_search() that may spawn multiple queries, each individual message received is cached rather than the overarching response. This makes it possible for one query in the sequence to be purged from the cache while others still return cached results, which means there is no chance of ever returning stale data.
We have had a lot of user requests to return TTLs on all the various parsers like `ares_parse_caa_reply()`, and likely this is because they want to implement caching mechanisms of their own, thus this PR should solve those issues as well.
Due to the internal data structures we have these days, this PR is less than 500 lines of new code.
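A configuration sketch, assuming `qcache_max_ttl` is expressed in
seconds:

```c
#include <ares.h>
#include <string.h>

/* Enable the query cache with a 60 second maximum TTL.  Responses with a
 * smaller TTL expire sooner; larger TTLs are clamped to this cap. */
static int init_with_qcache(ares_channel_t **channel)
{
  struct ares_options opts;
  memset(&opts, 0, sizeof(opts));
  opts.qcache_max_ttl = 60; /* seconds */
  return ares_init_options(channel, &opts, ARES_OPT_QUERY_CACHE);
}
```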
Fixes #608
Fix By: Brad House (@bradh352)
This PR implements ares_reinit() to safely reload a channel's configuration even if there are existing queries. This function can be called when a system configuration change is detected; however, since c-ares isn't thread aware, care must be taken to ensure no other c-ares calls are in progress at the time this function is called. Also, this function may update the open file descriptor list, so care must also be taken to wake any event loops and reprocess the list of file descriptors.
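A minimal usage sketch (how the configuration change is detected is
application-specific):

```c
#include <ares.h>

/* Call ares_reinit() from the thread driving c-ares when the system
 * resolver configuration is seen to change.  No other c-ares calls may be
 * in flight on this channel at the same time, and the event loop should
 * re-query the file descriptor set afterwards. */
static void on_system_config_changed(ares_channel_t *channel)
{
  if (ares_reinit(channel) != ARES_SUCCESS) {
    /* keep using the previous configuration on failure */
  }
}
```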
Fixes Bug #301
Fix By: Brad House (@bradh352)
This PR makes the server list a dynamic sorted list of servers. The sort order is [ consecutive failures, system config index ]. The server list can be updated via ares_set_servers_*(). Any queries currently directed to servers that are no longer in the list will be automatically re-queued to a different server.
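For example, swapping the server list at runtime could look like this
sketch using the CSV setter:

```c
#include <ares.h>

/* Replace the server list while queries may be outstanding.  Queries
 * attached to servers that disappear from the list are automatically
 * re-queued to one of the remaining servers. */
static int switch_servers(ares_channel_t *channel)
{
  return ares_set_servers_csv(channel, "192.0.2.1,192.0.2.2:53");
}
```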
Also, any time a failure occurs on the server, the sort order of the servers will be updated so that the one with the fewest consecutive failures is chosen for the next query that goes on the wire, this way bad or non-responsive servers are automatically isolated.
Since the server list is now dynamic, the tracking of query failures per server has been removed; instead we rely on the server sort order as previously described. This simplifies the logic while also reducing the amount of memory required per query. However, because of this dynamic nature, it may not be easy to determine the server attempt order for enqueued queries if there have been any failures.
If using ARES_OPT_ROTATE, this is now implemented as a random selection from the configured servers. Since the server list is dynamic, it's not possible to simply go to the next server, as the configuration could have changed between queries or attempts for the same query.
Finally, this PR moved some existing functions into new files to logically separate them.
This should address issues #550 and #440, while also setting up the framework to implement #301. #301 needs a little more effort since it configures things other than the servers themselves (domains, search, sortlist, lookups), which we need to make sure can be safely updated.
Fix By: Brad House (@bradh352)
As per #266, TCP queries are basically broken. If we get a partial reply, things just don't work, but unlike UDP, TCP may get fragmented and we need to properly handle that.
I've started creating a basic parser/buffer framework for c-ares for memory safety reasons, but it also helps for things like this where we shouldn't be manually tracking positions and fetching only a couple of bytes at a time from a socket. This parser/buffer will be expanded and used more in the future.
This also resolves #206 by allowing NULL to be specified for some socket callbacks so they will auto-route to the built-in c-ares functions.
Fixes: #206, #266
Fix By: Brad House (@bradh352)
As per #541, when using AF_UNSPEC with ares_getaddrinfo() (and in turn with ares_gethostbyname()), if we receive a successful response for one address class, we should not allow the other address class to continue on with retries; we just return the address class we have.
This will limit the overall query time to whatever timeout remains for the pending query for the other address class. It will not, however, terminate the other query, as it may still prove to be successful (possibly coming in less than a millisecond later) and we'd still want that result. It just turns off additional error processing to get the result back quicker.
Fixes Bug: #541
Fix By: Brad House (@bradh352)
Add a new ARES_OPT_UDP_MAX_QUERIES option with a udp_max_queries parameter that can be passed to ares_init_options(). This value defaults to 0 (unlimited) to maintain existing compatibility; any positive number will cause a new UDP ephemeral port to be created once the threshold is reached. We'll call these 'connections' even though that's technically wrong for UDP.
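A configuration sketch (the threshold value of 100 is hypothetical):

```c
#include <ares.h>
#include <string.h>

/* Rotate to a fresh UDP ephemeral port ("connection") after 100 queries
 * have been issued on the current one. */
static int init_with_udp_limit(ares_channel_t **channel)
{
  struct ares_options opts;
  memset(&opts, 0, sizeof(opts));
  opts.udp_max_queries = 100;
  return ares_init_options(channel, &opts, ARES_OPT_UDP_MAX_QUERIES);
}
```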
Implementation Details:
* Each server entry in a channel now has a linked-list of connections/ports for udp and tcp. The first connection in the list is the one most likely to be eligible to accept new queries.
* Queries are now tracked by connection rather than by server.
* Every time a query is detached from a connection, the connection that it was attached to will be checked to see if it needs to be cleaned up.
* Insertion, lookup, and searching for connections has been implemented as O(1) complexity so the number of connections will not impact performance.
* Removed is_broken from the server; it appears it would be set and immediately unset, so it must have been invalidated by a prior patch. A future patch should probably track consecutive server errors and de-prioritize such servers. The code right now will always try servers in the order of configuration, so a bad server in the list will always be tried, and we may have to rely on timeout logic to move on to the next.
* Various other cleanups to remove code duplication and for clarification.
Fixes Bug: #444
Fix By: Brad House (@bradh352)
c-ares currently lacks modern data structures that can make coding easier and more efficient. This PR implements a new linked list, skip list (sorted linked list), and hashtable implementation that are easy to use and hard to misuse. Though these implementations use more memory allocations than the prior implementation, the ability to more rapidly iterate on the codebase is a bigger win than any marginal performance difference (which is unlikely to be visible; modern systems are much more powerful than when c-ares was initially created).
The data structure implementation favors readability and audit-ability over performance, however using the algorithmically correct data type for the purpose should offset any perceived losses.
The primary motivation for this PR is to facilitate future implementations for issues #444, #135, #458, and possibly #301.
A couple of additional notes:
* The ares_timeout() function is now O(1) complexity instead of O(n) due to the use of a skip list.
* Some obscure bugs were uncovered which were being incorrectly validated in the test cases. These have been addressed in this PR but are not explicitly discussed.
* Fixed some dead-code warnings in ares_rand for systems that don't need rc4.
Fix By: Brad House (@bradh352)
All files have their license and copyright information clearly
identifiable. If not in the file header, they are set separately in
.reuse/dep5.
All used license texts are provided in LICENSES/.
ares_gethostbyname() and ares_getaddrinfo() do a lot of similar things; however, ares_getaddrinfo() has some desirable behaviors that should be imported into ares_gethostbyname(). For one, it sorts the address lists so the addresses most likely to succeed, based on the current system routes, come first. Next, when AF_UNSPEC is specified, it properly handles search lists rather than first searching all of AF_INET6 and then AF_INET, since it searches in parallel. Therefore, this PR should also resolve the issues attempted in #94.
A few things this PR does:
1. ares_parse_a_reply() and ares_parse_aaaa_reply() had very similar code to translate struct ares_addrinfo into a struct hostent as well as into struct ares_addrttl/ares_addr6ttl. This has been split out into the helper functions ares__addrinfo2hostent() and ares__addrinfo2addrttl() to remove the duplicative code.
2. ares_getaddrinfo() was apparently never honoring HOSTALIASES. This was discovered once ares_gethostbyname() was turned into a wrapper and the affected test cases started failing.
3. A slight API modification to save the query hostname into struct ares_addrinfo as the last element, name. Since this is the last element, and all user-level instances of struct ares_addrinfo are allocated internally by c-ares, this is not an ABI-breaking change nor would it impact API compatibility. This was needed since struct hostent has an h_name element.
4. Test Framework: MockServer tests via TCP would fail if more than one request was received at a time, which is common when ares_getaddrinfo() queries for both A and AAAA records simultaneously. In fact, this was a long-standing issue in which the ares_getaddrinfo() tests were bypassing TCP altogether. This has been corrected; the message is now processed in a loop.
5. Some tests had to be updated for overall correctness as they were invalid but somehow passing prior to this change.
Change By: Brad House (@bradh352)
EDNS retry should be based on a FORMERR response returned without an OPT RR, as per https://tools.ietf.org/html/rfc6891#section-7, rather than treating any unexpected error condition as a reason to disable EDNS on the channel.
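Expressed as illustrative pseudologic (not the patched source):

```c
/* Per RFC 6891 section 7, fall back to a non-EDNS query only when the
 * server answered FORMERR and the reply carried no OPT RR; other errors
 * should not disable EDNS for the channel. */
static int should_retry_without_edns(int rcode_is_formerr, int reply_has_opt_rr)
{
  return rcode_is_formerr && !reply_has_opt_rr;
}
```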
Fix By: Erik Lax (@eriklax)
The rc4 function iterates over a buffer of size buffer_len, whose
maximum value is INT_MAX, with a counter of type short that is not
guaranteed to have a maximum value of INT_MAX.
In circumstances where short is narrower than int and where buffer_len
is larger than the maximum value of a short, it may be possible to loop
infinitely as counter will overflow and never be greater than or equal
to buffer_len.
The solution is to make the comparison between types of equal width.
This commit defines counter as an int.
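A minimal sketch of the failure mode and the fix (not the actual rc4
code):

```c
/* If `counter` were a short and buffer_len exceeded SHRT_MAX, the counter
 * would wrap before the loop condition could become false, looping
 * forever.  Declaring it with the same width as buffer_len fixes this. */
static void touch_buffer(unsigned char *buffer, int buffer_len)
{
  int counter; /* previously: short counter; */
  for (counter = 0; counter < buffer_len; counter++) {
    buffer[counter] ^= 0x20; /* stand-in for the rc4 keystream mixing */
  }
}
```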
Fix By: Fionn Fitzmaurice (@fionn)