@ermo

Far fewer people are doing SVCB lookups, too. But, interestingly, they are doing them wrong.

And with a direct correlation to some other abuses.

Which does make me think that, in an ironic twist, it is the bad actors running robot vulnerability probes and scrapers that are the early adopters of SVCB, here.

#djbwares #DomainNameSystem #svcb

@ermo

This does make you the second person in the world (if you picked up the source after I put it in yesterday) who can run

dnsqr https google.com

or even

dnsqr https jdebp.info

I didn't think that people were using this, it only having been accepted in November 2023, but I discovered a few lookups in my logs.
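For anyone without the djbwares tools to hand, roughly the same lookup can be made with the third-party dnspython library; the library choice and resolver defaults here are mine, and nothing to do with how dnsqr itself works.

# Roughly "dnsqr https google.com" done with dnspython instead.
# Needs dnspython 2.1 or later for the HTTPS (type 65) record type.
import dns.resolver

for rr in dns.resolver.resolve("google.com", "HTTPS"):
    print(rr)   # SvcPriority, TargetName, and any SvcParams (alpn, port, ...)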

#djbwares #DomainNameSystem #https

Today's #DomainNameSystem monstrosity:

The content DNS servers for vtb.com. respond with an 11KiB answer to an ANY query for vtb.com. This is the biggest amplification-attack-enabling domain in today's logs.

Coming in second are the content DNS servers for softcom.net., returning a 5KiB response to an ANY query.
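Anyone wanting to check a domain's ANY amplification for themselves can do something along these lines; dnspython and TCP (to dodge truncation and see the full answer size) are my choices here, and the query name is a placeholder.

# Compare the size of an ANY query with the size of the answer that the
# domain's own content DNS servers hand back. dnspython assumed.
import dns.message, dns.query, dns.resolver

name = "example.com."                                    # domain being checked
ns = dns.resolver.resolve(name, "NS")[0].target          # one of its content servers
addr = dns.resolver.resolve(ns, "A")[0].address
query = dns.message.make_query(name, "ANY")
answer = dns.query.tcp(query, addr, timeout=5)
print(len(query.to_wire()), "byte query ->", len(answer.to_wire()), "byte answer")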

@simontatham

I know that you have had a lot of suggestions, over the years and recently. software. is a fair enough choice, and is definitely on-point.

On balance, I am glad that you resisted the lure of putty.party. .

(-:

#PuTTY #DomainNameSystem

@mav @kajer @simontatham

One could also ask why this red flag is not raised for the "glamour domain" that is exchange., or for io. and nu. for that matter.

In reality, this lopsided mental model has its roots in ICANN digging its heels in during the 1990s. Many people do not share the idea that ccTLDs and gTLDs are somehow more credible than anything else, especially as we have watched the antics played with them over the decades, which give the lie to that notion.

Hell, we only have uk. itself because the United Kingdom academic community domain-squatted in 1985. (-:

#DomainNameSystem

Have something to whet your appetites for #djbwares version 11.

If you don't know #djbdns, you probably won't notice what will make people who do know djbdns take interest. (-:

It's also going to contain the FreeBSD 13 build fixes that @ermo helped with.

#DomainNameSystem

Looking up www.bing.com. nowadays involves dnscache looking up intermediate domain names in org., com., net., and info., the cross-dependencies of which regularly exceed dnscache's nested gluelessness limit, above which it switches to a slower resolution algorithm.

Some quick tests indicate that raising this limit from 2 to 3 improves matters.
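(For anyone who has not met the jargon: a delegation is glueless when the referral names a server but supplies no address for it, so the server's own name has to be resolved first; that resolution can itself hit glueless delegations, nesting further. The following is only a toy illustration of counting that nesting, with invented names and data, and is not the dnscache source.)

# Toy illustration of nested gluelessness; data and names are invented.
DELEGATIONS = {
    # name being resolved: (server to ask, glue address or None)
    "www.bing.com":    ("dns.example.net", None),           # glueless
    "dns.example.net": ("ns.example.org",  None),           # glueless again
    "ns.example.org":  ("a.gtld.example",  "192.0.2.53"),   # glue supplied
}
LIMIT = 3   # dnscache uses 2 today; djbwares 11 will use 3

def address_for(name, nesting=0):
    if nesting > LIMIT:
        return None          # the real dnscache switches algorithms at this point
    server, glue = DELEGATIONS.get(name, (None, "198.51.100.1"))
    if glue is not None:
        return glue
    return address_for(server, nesting + 1)   # resolve the server's name first

print(address_for("www.bing.com"))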

So this will be in #djbwares 11.

#djbdns #dnscache #DomainNameSystem

If I were @standupmaths , there would be some Terrible Python Code parsing the output of dnsqr and fitting lines to the TTL values; an entire video on how to estimate from such data how many real machines under the covers serve up a seemingly single service on the Internet; and a second-channel video on how people did the same from the Netcraft uptime graphs of a couple of decades ago.

And then a clever viewer switching from parsing text from a pipe to some proper Python DNS client library and achieving a 6283% speedup.

(-:
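A minimal sketch of the proper-library version, assuming dnspython; the resolver address, query name, and sampling interval are placeholders rather than what I actually used.

# Repeatedly ask one public resolver for the same name and record the TTL
# it hands back; distinct caches behind the anycast show up as distinct
# descending TTL sequences. dnspython assumed; names are placeholders.
import time
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["9.9.9.9"]              # the service under test

samples = []
for _ in range(60):
    answer = resolver.resolve("www.example.com", "A")
    samples.append((time.time(), answer.rrset.ttl))
    time.sleep(10)

for when, ttl in samples:
    print(int(when), ttl)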

#DomainNameSystem #BendersTest #Python #TerriblePythonCode #StandUpMaths

[…Continued]

#Quad9, #GooglePublicDNS, and my ISP all appeared to respect their capped TTLs, incurring cache misses when the TTLs reached zero. Unsurprisingly.

I know, both from prior experience and having seen the code, that the on-machine cache respects its TTLs in like manner.

Anyone expecting this (quite conventional) behaviour would be greatly misled by CloudFlare, however.

Quad9 and Google Public DNS were better than #CloudFlare, whether measured by retention time or by the amount of re-population needed to fill every cache behind the anycast; but with their more aggressive TTL capping they got nowhere near as long an interval between cache misses as the on-machine cache does.

CloudFlare in fact incurred cache misses multiple times per hour, at one point fetching anew on *all* of its caches after a mere 10-minute gap when the test was halted. The TTLs never even managed to count down to 41 days before there was a (sometimes global!) cache miss.

#DomainNameSystem #BendersTest

[…Continued]

The pattern is not ideal, because the anycasting is of course determined by moment-to-moment circumstances; but the multiple descending series of TTL values revealed that:

My ISP had at least 3 caches behind 2 apparent IP addresses.

#CloudFlare and #GooglePublicDNS had at least 8 caches behind 2 apparent IP addresses.

#Quad9 had at least 2 caches behind 2 apparent IP addresses, but it was not as simple as 1 cache per IP address. Sometimes they swapped, or gave identical results.
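The counting behind those numbers needs nothing clever. A toy sketch over made-up TTL observations: any observation that cannot continue an already-seen descending run must have come from yet another cache, so the number of runs is a lower bound on the number of caches (assuming none of them refreshed mid-observation).

# Lower-bound the number of caches from a sequence of observed TTLs.
# The numbers here are invented, purely to show the principle.
observations = [86400, 86390, 172800, 86380, 172790, 259200]

runs = []                         # last TTL seen on each presumed cache
for ttl in observations:
    # continue the tightest-fitting descending run, if any fits
    best = None
    for i, last in enumerate(runs):
        if ttl <= last and (best is None or last < runs[best]):
            best = i
    if best is None:
        runs.append(ttl)          # nothing fits: at least one more cache
    else:
        runs[best] = ttl

print("at least", len(runs), "caches")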

[Continued…] #DomainNameSystem #BendersTest