This PR makes the c-ares parser introduced in 1.21, along with the new writer and associated helpers, public. These helpers are declared in a new public header, `ares_dns_record.h`, which should _**not**_ be included directly; simply including `ares.h` is sufficient. This will address #587 as well as #470.
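As a quick illustration, a consumer might parse a wire-format message through the newly public interface as sketched below. This is only a sketch based on the 1.22-era helper names (`ares_dns_parse()`, `ares_dns_record_rr_cnt()`, `ares_dns_record_destroy()`); check `ares_dns_record.h` as shipped for the exact signatures.
```c
#include <ares.h>   /* do NOT include ares_dns_record.h directly */
#include <stddef.h>
#include <stdio.h>

/* Parse a raw DNS message and print how many answer/additional RRs it has. */
static void dump_counts(const unsigned char *wire, size_t wire_len)
{
  ares_dns_record_t *rec = NULL;

  if (ares_dns_parse(wire, wire_len, 0, &rec) != ARES_SUCCESS) {
    fprintf(stderr, "failed to parse DNS message\n");
    return;
  }

  printf("answers: %zu, additional: %zu\n",
         ares_dns_record_rr_cnt(rec, ARES_SECTION_ANSWER),
         ares_dns_record_rr_cnt(rec, ARES_SECTION_ADDITIONAL));

  ares_dns_record_destroy(rec);
}
```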
A follow-up PR will be made which will transform `adig` to use the new parsers and helpers.
This PR does not currently add man pages for these public functions; those will come in a follow-up PR once the `adig` migration is done, which may expose additional needed helpers.
The two aforementioned PRs will be done before the 1.22 release.
Fix By: Brad House (@bradh352)
The OPT RR has some seldom-used options consisting of a 16-bit key and a binary value. The current parser and writer did not support these; this PR adds that support. The same format is also used for SVCB/HTTPS records, so getting this in now is necessary groundwork for supporting those RR types.
Also, we split the Binary record format into BIN and BINP, where BINP indicates that the binary data is _likely_ printable and guarantees a NULL terminator. This is helpful for anyone attempting to print RRs.
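For illustration, attaching one such 16-bit key / binary value option to an existing OPT RR might look like the sketch below. `ares_dns_rr_set_opt()` and the `ARES_RR_OPT_OPTIONS` key reflect the 1.22-era API as best recalled; treat the exact names and signature as assumptions and confirm against `ares_dns_record.h`.
```c
#include <ares.h>
#include <stddef.h>

/* Attach a single EDNS option (16-bit code + opaque binary value) to an
 * existing OPT RR.  On the read side the value comes back as BIN/BINP data. */
static ares_status_t add_edns_option(ares_dns_rr_t *opt_rr, unsigned short key,
                                     const unsigned char *val, size_t val_len)
{
  return ares_dns_rr_set_opt(opt_rr, ARES_RR_OPT_OPTIONS, key, val, val_len);
}
```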
Fix By: Brad House (@bradh352)
As per #470, c-ares is missing a parser for the TLSA record format (RFC 6698). This PR introduces that parser.
Once the new parser interface becomes public and this PR is merged, #470 can be closed.
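For illustration, reading the parsed TLSA fields might look like the sketch below. The key names (`ARES_RR_TLSA_CERT_USAGE`, `ARES_RR_TLSA_SELECTOR`, `ARES_RR_TLSA_MATCH`, `ARES_RR_TLSA_DATA`) and the getters are my best recollection of the 1.22-era API and should be verified against `ares_dns_record.h`.
```c
#include <ares.h>
#include <stddef.h>
#include <stdio.h>

/* Print the RFC 6698 fields of a parsed TLSA resource record. */
static void print_tlsa(const ares_dns_rr_t *rr)
{
  size_t               data_len = 0;
  const unsigned char *data     = ares_dns_rr_get_bin(rr, ARES_RR_TLSA_DATA, &data_len);

  printf("usage=%u selector=%u match=%u data_len=%zu\n",
         (unsigned int)ares_dns_rr_get_u8(rr, ARES_RR_TLSA_CERT_USAGE),
         (unsigned int)ares_dns_rr_get_u8(rr, ARES_RR_TLSA_SELECTOR),
         (unsigned int)ares_dns_rr_get_u8(rr, ARES_RR_TLSA_MATCH),
         data_len);
  (void)data; /* certificate association data, e.g. a SHA-256 digest */
}
```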
Fix By: Brad House (@bradh352)
The `ares_dns_record_t` data structure created in the prior release can hold a complete parsed DNS message and provides all the helpers needed to fill it in. This PR adds write support for this data structure so it can produce a complete wire-format message, including features such as DNS name compression as defined in RFC 1035. Though this message-writing capability goes further than c-ares needs internally, external users may find it useful ... and we may find it useful for test validation as well.
This also replaces the existing message-writing code in `ares_create_query()`, as well as the logic in `ares_process.c`'s `process_answer()` that rewrites the request message without EDNS.
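A sketch of building and writing a query with the new writer follows. It assumes the 1.22-era helpers (`ares_dns_record_create()`, `ares_dns_record_query_add()`, `ares_dns_write()`, `ares_free_string()`); check `ares_dns_record.h` for the exact signatures and enum values.
```c
#include <ares.h>
#include <stddef.h>

/* Build a simple A query for www.example.com and render it to wire format. */
static ares_status_t build_query(unsigned char **buf, size_t *buf_len)
{
  ares_dns_record_t *rec = NULL;
  ares_status_t      status;

  status = ares_dns_record_create(&rec, 0x1234 /* id */, ARES_FLAG_RD,
                                  ARES_OPCODE_QUERY, ARES_RCODE_NOERROR);
  if (status != ARES_SUCCESS)
    return status;

  status = ares_dns_record_query_add(rec, "www.example.com",
                                     ARES_REC_TYPE_A, ARES_CLASS_IN);
  if (status == ARES_SUCCESS)
    status = ares_dns_write(rec, buf, buf_len); /* applies RFC 1035 name compression */

  ares_dns_record_destroy(rec);
  return status; /* on success, free *buf with ares_free_string() when done */
}
```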
Fix By: Brad House (@bradh352)
Essentially all modern DNS servers support EDNS; by using it by default, larger responses are allowed without the need to switch to TCP. If a DNS server that doesn't support EDNS is encountered, this is detected by the lack of an OPT RR in the response, and the query is automatically retried without EDNS.
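The sketch below shows the detection idea only (the real logic lives in `ares_process.c`): a response with no OPT RR in its ADDITIONAL section means the server did not answer with EDNS. It uses the 1.22-era record API as best recalled (`ares_dns_parse()`, `ares_dns_record_rr_cnt()`, `ares_dns_record_rr_get()`, `ares_dns_rr_get_type()`); verify the signatures against `ares_dns_record.h`.
```c
#include <ares.h>
#include <stddef.h>

/* Return 1 if the raw response contains an OPT RR in the additional section. */
static int response_has_opt_rr(const unsigned char *abuf, size_t alen)
{
  ares_dns_record_t *rec     = NULL;
  int                has_opt = 0;
  size_t             i;

  if (ares_dns_parse(abuf, alen, 0, &rec) != ARES_SUCCESS)
    return 0;

  for (i = 0; i < ares_dns_record_rr_cnt(rec, ARES_SECTION_ADDITIONAL); i++) {
    const ares_dns_rr_t *rr = ares_dns_record_rr_get(rec, ARES_SECTION_ADDITIONAL, i);
    if (ares_dns_rr_get_type(rr) == ARES_REC_TYPE_OPT) {
      has_opt = 1;
      break;
    }
  }

  ares_dns_record_destroy(rec);
  return has_opt;
}
```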
Fix By: Brad House (@bradh352)
`ares_channel` is defined as `typedef struct ares_channeldata *ares_channel;`. The problem is that this embeds the pointer into the typedef, which means the data an `ares_channel` points to can never be declared `const`: if you write `const ares_channel channel`, it expands to `struct ares_channeldata *const channel` and not `const struct ares_channeldata *channel`.
We will now typedef `ares_channel_t` as `typedef struct ares_channeldata ares_channel_t;`, so if you write `const ares_channel_t *channel`, it properly expands to `const struct ares_channeldata *channel`.
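A minimal illustration of the difference in expansion (the variable names `chan_a` and `chan_b` are just for demonstration):
```c
typedef struct ares_channeldata *ares_channel;    /* legacy: pointer baked into the typedef */
typedef struct ares_channeldata  ares_channel_t;  /* new: plain struct typedef              */

/* expands to `struct ares_channeldata *const chan_a` -- the pointer is const,
 * but the pointed-to channel data is NOT */
extern const ares_channel    chan_a;

/* expands to `const struct ares_channeldata *chan_b` -- the channel data
 * itself is const, which is what we actually want */
extern const ares_channel_t *chan_b;
```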
We are maintaining the old typedef for API compatibility with existing integrations, and due to how the typedef expands this should not cause any compiler warnings for existing code. There are no ABI implications with this change. I could be convinced to keep the existing public functions using `ares_channel` if a sufficient argument exists, but internally we really need to make this change to follow modern best practices.
This change will allow us to internally use `const ares_channel_t *` where appropriate. Whether or not we decide to change any public interfaces to use `const` may require further discussion on whether there might be ABI implications (I don't think so, but I'm also not 100% sure what a compiler does with `const` when emitting machine code ... I think ABI implications would be more likely going in the opposite direction).
FYI, this PR was done via a combination of sed and clang-format; the only manual code changes were the addition of the new typedef and a couple of doc fixes :)
Fix By: Brad House (@bradh352)
This PR makes the server list a dynamically sorted list of servers. The sort order is [ consecutive failures, system config index ]. The server list can be updated via `ares_set_servers_*()`; any queries currently directed to servers that are no longer in the list will be automatically re-queued to a different server.
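For example, the server list can be swapped out on a live channel with the long-standing `ares_set_servers_csv()` call; the addresses below are placeholders from the TEST-NET range, used purely for illustration.
```c
#include <ares.h>
#include <stdio.h>

/* Replace the channel's server list at runtime; in-flight queries bound to
 * servers that drop out of the new list are re-queued automatically. */
static void update_servers(ares_channel_t *channel)
{
  int status = ares_set_servers_csv(channel, "192.0.2.1,192.0.2.2:53");
  if (status != ARES_SUCCESS)
    fprintf(stderr, "failed to set servers: %s\n", ares_strerror(status));
}
```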
Also, any time a failure occurs on the server, the sort order of the servers will be updated so that the one with the fewest consecutive failures is chosen for the next query that goes on the wire, this way bad or non-responsive servers are automatically isolated.
Since the server list is now dynamic, per-server tracking of query failures has been removed; instead, we rely on the server sort order as previously described. This simplifies the logic while also reducing the amount of memory required per query. However, because of this dynamic nature, it may not be easy to determine the server attempt order for enqueued queries if there have been any failures.
If `ARES_OPT_ROTATE` is used, it is now implemented as a random selection from the configured servers. Since the server list is dynamic, it's not possible to simply go to the next server, as the configuration could have changed between queries or between attempts for the same query.
Finally, this PR moves some existing functions into new files to logically separate them.
This should address issues #550 and #440, while also laying the framework for implementing #301. #301 needs a little more effort since it configures things other than the servers themselves (domains, search, sortlist, lookups), and we need to make sure those can be safely updated.
Fix By: Brad House (@bradh352)
AppVeyor was using Visual Studio 2015 along with old versions of MinGW. Update to the latest images AppVeyor provides, and also add an MSYS2 build test using MinGW, which uses the bleeding-edge version.
While researching #590, this also uncovered a bug in CMake not properly detecting `if_indextoname()` on Windows. This has been corrected, as has the underlying issue reported in #590.
Fix By: Brad House (@bradh352) and Jonas Kvinge (@jonaski)
HOSTS FILE PROCESSING OVERVIEW
==============================
The hosts file on the system contains static entries to be processed locally
rather than querying the nameserver. Each row is an IP address followed by
a list of space-delimited hostnames that map to that IP address. This is
used for both forward and reverse lookups.
We cache the entire parsed hosts file for performance reasons. Some files
may be quite sizable and, as per Issue #458, can approach 1/2MB in size, so
the parse overhead across a rapid succession of queries can be quite large.
The entries are stored in forward and reverse hashtables so we can get O(1)
lookup performance. The file is cached until its modification timestamp
changes (or for 60s if there is no stat() capability implemented).
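A purely illustrative sketch of the caching idea follows; these names are hypothetical and do not match the actual c-ares internals. One shared entry is referenced from both a host-to-entry table and an IP-to-entry table, giving O(1) lookups in either direction, and the cache is invalidated when the file's mtime changes.
```c
#include <stddef.h>
#include <time.h>

/* One merged hosts-file entry: all IPs and all hostnames that belong together. */
typedef struct {
  char  **ips;     /* e.g. "127.0.0.1", "::1"                   */
  size_t  nips;
  char  **hosts;   /* e.g. "localhost.localdomain", "localhost" */
  size_t  nhosts;
} hosts_entry_t;

/* Cache of the whole parsed file, indexed both ways. */
typedef struct {
  void  *by_host;  /* hashtable: hostname  -> hosts_entry_t *      */
  void  *by_ip;    /* hashtable: ip string -> hosts_entry_t *      */
  time_t mtime;    /* reparse when the hosts file's mtime changes  */
} hosts_cache_t;
```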
The hosts file processing is rather unique. It has to merge all related
hosts and IPs into a single entry due to the file format. For instance,
take the below:
```
127.0.0.1 localhost.localdomain localhost
::1 localhost.localdomain localhost
192.168.1.1 host.example.com host
192.168.1.5 host.example.com host
2620:1234::1 host.example.com host6.example.com host6 host
```
This will yield 2 entries.
1) ips: `127.0.0.1,::1`
hosts: `localhost.localdomain,localhost`
2) ips: `192.168.1.1,192.168.1.5,2620:1234::1`
hosts: `host.example.com,host,host6.example.com,host6`
It could be argued that when searching for `192.168.1.1` the `host6`
hostnames should not be returned, but this implementation will return them
since they are related (both IPs share the FQDN host.example.com). It is
unlikely this will matter in the real world.
Fix By: Brad House (@bradh352)