[…Continued]

#Quad9, #GooglePublicDNS, and my ISP all appeared to respect their capped TTLs, incurring cache misses when the TTLs reached zero. Unsurprisingly.

I know, both from prior experience and having seen the code, that the on-machine cache respects its TTLs in like manner.

Anyone expecting this (quite conventional) behaviour would be greatly misled by CloudFlare, however.

Quad9 and Google Public DNS were better than #CloudFlare in retention time and in the amount of re-population needed to fill every cache behind the anycast; but with their more aggressive TTL capping they got nowhere near as long an interval between cache misses as the on-machine cache has.

CloudFlare, however, in fact incurred cache misses multiple times per hour, at one point fetching anew on *all* of its caches after a mere 10-minute gap when the test was halted. The TTLs never even managed to count down to 41 days before there was a (sometimes global!) cache miss.

#DomainNameSystem #BendersTest

[…Continued]

The pattern is not ideal, because the anycasting is of course determined by moment-to-moment circumstances; but the multiple descending series of TTL values revealed that:

My ISP had at least 3 caches behind 2 apparent IP addresses.

#CloudFlare and #GooglePublicDNS had at least 8 caches behind 2 apparent IP addresses.

#Quad9 had at least 2 caches behind 2 apparent IP addresses, but it was not as simple as 1 cache per IP address. Sometimes they swapped, or gave identical results.

[Continued…] #DomainNameSystem #BendersTest

[…Continued]

Everyone properly counted down the TTLs.

Only the on-machine cache counted down monotonically as expected, however. The others had TTLs that counted down in the long term but jumped up and down in the short term.

There was a discernible pattern, thanks to the 10 second loop interval in my test. There were multiple series of descending TTLs, swapping in and out.

This pattern revealed that there are multiple caches behind anycast, even at my ISP; those caches not sharing data. Each was separately populated during the first few test loop iterations, and separately re-populated after later misses.
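The counting can be sketched as follows. This is a hypothetical reconstruction of the analysis, not the actual tooling used in the test; the function name, the interval, and the slack tolerance are my own choices.

```python
# Hypothetical reconstruction of the cache-counting analysis, not the
# actual tooling: the function name, interval, and slack are my own.

def count_caches(ttls, interval=10, slack=2):
    """Given one observed TTL per test-loop iteration against a single
    anycast IP address, partition the observations into descending
    series counting down at roughly 1 second per second.  Each series
    that cannot be merged with another implies a distinct cache, so
    (absent cache misses) the number of series is a lower bound on the
    number of caches behind that address."""
    series = []  # (last TTL seen, iteration at which it was seen)
    for i, ttl in enumerate(ttls):
        for j, (last_ttl, last_i) in enumerate(series):
            expected = last_ttl - (i - last_i) * interval
            if abs(ttl - expected) <= slack:   # plausible continuation
                series[j] = (ttl, i)
                break
        else:
            series.append((ttl, i))            # must be a new cache
    return len(series)

# One monotonically descending cache, versus three interleaved caches
# that were each first populated on a different early iteration:
print(count_caches([600, 590, 580, 570]))            # 1
print(count_caches([600, 600, 600, 570, 570, 570]))  # 3
```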

[Continued…] #DomainNameSystem #CloudFlare #Quad9 #GooglePublicDNS #BendersTest

[…Continued]

The on-machine cache capped the 42 day TTL down to 1 week, as documented.

There was no pressure to evict the resource record set, even though the machine was not dedicated to just the test and other use was being made of the on-machine cache. There was no cache miss at all after the first one.

My ISP's proxy DNS servers also capped the TTL down to 1 week, interestingly.

Only #CloudFlare passed through the original 42 day TTL. The high TTLs might lead one to conclude that CloudFlare thus cached the longest and best. In reality it cached the shortest and worst, more on which in a moment.

#GooglePublicDNS and #Quad9 capped the 42 day TTL the most aggressively, the former reducing it to a couple of days, the latter to a mere 12 hours. They turned out to do better than CloudFlare, however.
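The capping behaviour above amounts to clamping the authoritative TTL at each resolver's maximum. A small sketch, using the values as observed in this test (Google's "couple of days" cap is approximate):

```python
# Effective TTL handed out on a cache miss: the authoritative TTL,
# clamped at each resolver's cap.  Cap values are as observed in the
# test above; Google's "couple of days" is approximate.

DAY = 86_400

CAPS = {
    "on-machine":      7 * DAY,   # documented 1-week cap
    "ISP":             7 * DAY,
    "CloudFlare":      None,      # passes the original TTL through
    "GooglePublicDNS": 2 * DAY,   # approximately
    "Quad9":           12 * 3600,
}

def effective_ttl(authoritative_ttl, cap):
    return authoritative_ttl if cap is None else min(authoritative_ttl, cap)

for name, cap in CAPS.items():
    print(name, effective_ttl(42 * DAY, cap))
```

The high pass-through number is exactly why CloudFlare's TTLs *look* the best while, as described above, its actual retention was the worst.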

[Continued…] #DomainNameSystem #BendersTest

[…Continued]

The latency of the on-machine server, measured as total transaction time, was always in single milliseconds after the very first cache-miss query.

The actual latencies of all of the #Quad9, #GooglePublicDNS, and #CloudFlare public proxy DNS servers were in tens of milliseconds for cache hits.

My ISP's proxy DNS servers are 6 hops away, and also had an actual latency in the tens of milliseconds, but slightly shorter than those of the third-party ones. None of the third-party ones are in fact closer than 7 hops away.

The latency to the relevant content DNS server was in the hundreds of milliseconds, and the latencies of the third-party proxy DNS servers when they had cache misses were between this and twice this.

[Continued…] #DomainNameSystem #BendersTest

If you thought that using a third-party public resolving proxy DNS server gained you economies of scale because you shared a cache with other people, think again.

I ran Bender's Test (https://news.ycombinator.com/item?id=44534938) in a loop, once every 10 seconds, intermittently over a couple of days.

I added an on-machine resolving proxy DNS server on 127.0.0.1, my ISP's proxy DNS servers, and #Quad9's, #GooglePublicDNS's, and #CloudFlare's 2nd IP addresses to Bender's set.
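A measurement loop of this shape can be sketched with only the Python standard library. This is my own illustration, not the actual Bender's Test tooling; the query name and server addresses shown in the comments are placeholders.

```python
# A stdlib-only sketch of the measurement loop: my own illustration,
# not the actual Bender's Test tooling.  Each server is sent a DNS A
# query over UDP, and the answer TTL plus the total transaction time
# are recorded.

import socket
import struct
import time

def build_query(name, qid=0x1234):
    # Header: ID, flags (RD set), QDCOUNT=1, AN/NS/ARCOUNT=0.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split("."))
    # Root label, then QTYPE=A (1) and QCLASS=IN (1).
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

def first_answer_ttl(response):
    qdcount, ancount = struct.unpack(">HH", response[4:8])
    if ancount == 0:
        return None
    i = 12
    for _ in range(qdcount):               # skip the question section
        while response[i] != 0:
            i += 1 + response[i]
        i += 5                             # root label + QTYPE + QCLASS
    # The answer's NAME is usually a 2-byte compression pointer.
    if response[i] & 0xC0 == 0xC0:
        i += 2
    else:
        while response[i] != 0:
            i += 1 + response[i]
        i += 1
    _rtype, _rclass, ttl = struct.unpack(">HHI", response[i:i + 8])
    return ttl

def measure(server, name):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        start = time.monotonic()
        s.sendto(build_query(name), (server, 53))
        response, _ = s.recvfrom(4096)
        return first_answer_ttl(response), time.monotonic() - start

# The loop itself then amounts to, once every 10 seconds, something like:
#
#     for server in ["127.0.0.1", "9.9.9.10", "8.8.4.4", "1.0.0.1"]:
#         print(server, *measure(server, "example.com"))
```

Watching the TTL and the transaction time per server per iteration is enough to produce every observation in this thread.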

The results reveal that anyone who conflates latency with cache misses, or who claims that these must give better cache hit rates than running one's own proxy DNS server on-machine (or even on-LAN), hasn't a clue as to the quite different reality of these third-party public DNS servers.

In detail:

[Continued…] #DomainNameSystem #BendersTest


Rasch Rechtsanwälte wants to force Quad9 into network blocking

Copyright: instead of the web hoster, domain registrar, or CDN provider, Rasch Rechtsanwälte is now going after a DNS resolver.

Tarnkappe.info

Something is wrong in the Matrix. #GooglePublicDNS #Route53

When suddenly, drama. On the #NANOG list: "Google DNS intermittent ServFail for Disney subdomain". Tragic!

#DNS #GooglePublicDNS #Disney