#DomainNameSystem

2025-11-26

@cks @lanodan

Missing from @drscriptt 's list are AAAA, HTTPS, and SVCB records.

AAAA has plenty of obvious choices.

You'll know the . convention for SRV, SVCB, and MX resource record sets, of course.
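
For those who do not: a target of "." signals "no such service", i.e. a null MX per RFC 7505, an SRV with no service per RFC 2782, and, for alias-mode SVCB/HTTPS records, a TargetName of "." meaning that the service does not exist. A minimal sketch of checking for it, assuming dnspython and a hypothetical example.net. domain:

import dns.resolver

def is_null_mx(domain):
    # RFC 7505: a single "0 ." MX means that the domain accepts no mail.
    answer = dns.resolver.resolve(domain, "MX")
    return all(str(rr.exchange) == "." for rr in answer)

def https_records(domain):
    # RFC 9460: priority 0 is alias mode; an alias target of "." means "no service".
    answer = dns.resolver.resolve(domain, "HTTPS")
    return [(rr.priority, str(rr.target)) for rr in answer]

print(is_null_mx("example.net."))
print(https_records("example.net."))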

I shall just drop in my personal experience from earlier this year that an accidentally supplied HTTPS resource record can *definitely* break WWW traffic, because browsers in practice do not obey RFC9460 §2.4.2.

#djbdns #DomainNameSystem #SplitHorizon #ReservedSuperDomains #DNS #HTTPS #SVCB

2025-11-26

@cks @lanodan @drscriptt

There are actually quite a few, nowadays. See RFCs 6762, 7686, and 8375.

example. is not the worst choice, although you could have gone with test. or internal. or intranet. .

Given your objective, any of the further ones that imply a residence or a corporation seem less well suited.

Although home.arpa.'s public delegation to the blackhole-{1,2}.iana.org. names is re-used.

github.com/jdebp/nosh/blob/tru

#djbdns #DomainNameSystem #SplitHorizon #ReservedSuperDomains #DNS

Marcel SIneM(S)US simsus@social.tchncs.de
2025-11-24 (original post 2025-10-22)

Here you can see how many services use #AWS ...

#AmazonWebServices: Global outage on Monday morning | heise online heise.de/news/Amazon-Web-Servi #DNS #DomainNameSystem #Amazon

2025-08-28

@pmevzek

Thank you. But it actually tells me less than I already knew from cranking up developer tools and the server logs, and from direct observation: which is, as I said, that the browsers continued to communicate with the origin domain's IP addresses, and did not switch to the target domain's.

#HTTPS alias-mode resource records that nominally upgrade from HTTP to HTTPS actually instead locked the browsers on a Windows machine out of my own WWW site for half a day.

#DomainNameSystem

2025-08-27

Today's top tip:

Don't use HTTPS resource records.

I set some alias-mode ones up pointing from one domain to another domain. WWW browsers were supposed, per RFC9460 §2.4.2, to talk to the target domain.

All of my WWW browsers that picked up the records continued to send their requests to the IP address for the *original* domain, and simply acted as if HSTS was turned on for that domain.

In practice, the example scenario in RFC9460 does not actually work as stated.
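
What clients are supposed to do with an alias-mode record is simple enough to sketch (dnspython assumed; the names are placeholders, not my actual domains):

import dns.resolver

def https_alias_target(name):
    # RFC 9460 §2.4.2: an alias-mode record (SvcPriority 0) tells the client to
    # resolve the target name, and to connect to that, instead of the owner name.
    for rr in dns.resolver.resolve(name, "HTTPS"):
        if rr.priority == 0:
            return str(rr.target)
    return name

print("connect to the addresses of", https_alias_target("www.example.com."))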

#HTTPS #DomainNameSystem

2025-08-21

@ermo

There is a much smaller number of people doing SVCB lookups, too. But, interestingly, they are doing them wrongly.

And with a direct correlation to some other abuses.

Which does make me think that, in an ironic twist, it is the bad actors running robot vulnerability probes and scrapers that are the early adopters of SVCB, here.

#djbwares #DomainNameSystem #svcb

2025-08-21

@ermo

This does make you the second person in the world (if you picked up the source after I put it in yesterday) who can run

dnsqr https google.com

or even

dnsqr https jdebp.info

I didn't think that people were using this, it only having been accepted in November 2023, but I discovered a few lookups in my logs.

#djbwares #DomainNameSystem #https

2025-08-18

Today's #DomainNameSystem monstrosity:

The content DNS servers for vtb.com. respond with an 11KiB answer to an ANY query for vtb.com. This is the biggest amplification-attack-enabling domain in today's logs.

Coming in second are the content DNS servers for softcom.net., returning a 5KiB response to an ANY query.
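
Measuring this sort of thing is straightforward; a rough sketch, assuming dnspython, with a placeholder server address standing in for the actual content DNS servers:

import dns.message, dns.query

def any_response_size(name, server):
    # Compare the response size with the roughly 25-byte query to gauge the
    # amplification factor.  TCP is used here so that nothing is truncated.
    query = dns.message.make_query(name, "ANY")
    response = dns.query.tcp(query, server, timeout=5)
    return len(response.to_wire())

print(any_response_size("vtb.com.", "192.0.2.1"))  # placeholder address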

2025-08-16

@simontatham

I know that you have had a lot of suggestions, over the years and recently. software. is a fair enough choice, and is definitely on-point.

On balance, I am glad that you resisted the lure of putty.party. .

(-:

#PuTTY #DomainNameSystem

2025-08-16

@mav @kajer @simontatham

One could also ask why this red flag is not seen for the exchange. "glamour domain". Or for io. and nu. for that matter.

In reality, this lopsided mental model has its roots in ICANN digging its heels in during the 1990s. But many people do not share the idea that ccTLDs and gTLDs are somehow more credible than anything else, especially as we've watched the antics played with them over the decades, giving the lie to that notion.

Hell, we only have uk. itself because the United Kingdom academic community domain-squatted in 1985. (-:

#DomainNameSystem

2025-08-09

Have something to whet your appetites for #djbwares version 11.

If you don't know #djbdns, you probably won't notice what will make people who do know djbdns take interest. (-:

It's also going to contain the FreeBSD 13 build fixes that @ermo helped with.

#DomainNameSystem

Image: A black on white terminal display showing a Z shell session. The dnsns command is run on the domain "." and piped through the sort command, yielding a list of domain names. The same command is run and the domain names given to the dnsip and printf commands, yielding a list of IP addresses.

Image: A black on white terminal display showing a Z shell session. The dnsq command is run making an NS type DNS lookup on the domain "." to the l.root-servers.net. server. What the server sent back, which is the normal ICANN list, is displayed.

Image: A black on white terminal display showing a Z shell session. Two grep commands show lines from a data file. Some dnsqr, dnsmx, and dnsip commands show the result of (indirectly) querying a DNS server publishing that data file. And finally a tail command shows the last 6 lines of the DNS server's log where it has logged receiving the queries. The IP addresses used are variants on the IPv4 and IPv6 loopback addresses.
2025-08-08

Looking up www.bing.com. nowadays involves dnscache looking up intermediate domain names in org., com., net., and info.; the cross-dependencies of which regularly exceed dnscache's nested gluelessness limit above which it switches to a slower resolution algorithm.

Some quick tests indicate that raising this limit from 2 to 3 improves matters.

So this will be in #djbwares 11.
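
For a feel for what nested gluelessness is, here is a rough illustration; dnspython assumed, this is not dnscache's actual algorithm, and approximating the zone that contains a server's name by stripping one label is a simplification:

import dns.resolver

def glueless_depth(zone, depth=0, limit=4):
    # A server whose name lies outside the zone that it serves has no glue in
    # the delegation, so obtaining its address is a further, nested, resolution.
    if depth >= limit or zone in ("", "."):
        return depth
    try:
        servers = [str(rr.target) for rr in dns.resolver.resolve(zone, "NS")]
    except Exception:
        return depth
    deepest = depth
    for server in servers:
        if not server.endswith("." + zone):
            enclosing = server.split(".", 1)[1]   # crude approximation
            deepest = max(deepest, glueless_depth(enclosing, depth + 1, limit))
    return deepest

print(glueless_depth("bing.com."))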

#djbdns #dnscache #DomainNameSystem

2025-07-17

If I were @standupmaths , there would be some Terrible Python Code parsing the output of dnsqr and doing line fitting to the TTL values; and an entire video on how to estimate from such data how many real machines under the covers serve up some seemingly single service on the Internet, and a second channel one on how people did that from the Netcraft uptime graphs that it used to present a couple of decades ago.

And then a clever viewer switching from parsing text from a pipe to some proper Python DNS client library and achieving a 6283% speedup.

(-:
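
In that spirit, here is a deliberately quick-and-dirty sketch of the data gathering; it assumes djbwares' dnsqr on the PATH, that it prints its usual "answer: name ttl type data" lines, and a placeholder domain:

# Terrible Python Code, as promised: shell out to dnsqr every 10 seconds and
# scrape the TTL from the first answer line, emitting (unix time, TTL) samples.
import re, subprocess, time

def observed_ttl(name):
    output = subprocess.run(["dnsqr", "a", name], capture_output=True, text=True).stdout
    match = re.search(r"^answer: \S+ (\d+) ", output, re.MULTILINE)
    return int(match.group(1)) if match else None

while True:
    print(int(time.time()), observed_ttl("www.example.com"))
    time.sleep(10)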

#DomainNameSystem #BendersTest #Python #TerriblePythonCode #StandUpMaths

2025-07-17

[…Continued]

#Quad9, #GooglePublicDNS, and my ISP all appeared to respect their capped TTLs, having cache misses when the TTLs reached zero. Unsurprisingly.

I know, both from prior experience and having seen the code, that the on-machine cache respects its TTLs in like manner.

Anyone expecting this (quite conventional) behaviour would be greatly misled by CloudFlare, however.

Quad9 and Google Public DNS were better than #CloudFlare, in retention time or the amount of re-population needed to fill every cache behind the anycast; but with their more aggressive TTL capping they got nowhere near as long an interval between cache misses as the on-machine cache has.

CloudFlare, however, in fact incurred cache misses multiple times per hour, at one point fetching anew on *all* of its caches after a mere 10 minute gap when the test was halted. The TTLs never even managed to count down to 41 days before there was a (sometimes global!) cache miss.

#DomainNameSystem #BendersTest

2025-07-17

[…Continued]

The pattern is not ideal, because the anycasting is of course determined by moment-to-moment circumstances; but the multiple descending series of TTL values revealed that:

My ISP had at least 3 caches behind 2 apparent IP addresses.

#CloudFlare and #GooglePublicDNS had at least 8 caches behind 2 apparent IP addresses.

#Quad9 had at least 2 caches behind 2 apparent IP addresses, but it was not as simple as 1 cache per IP address. Sometimes they swapped, or gave identical results.

[Continued…] #DomainNameSystem #BendersTest

2025-07-17

[…Continued]

Everyone properly counted down the TTLs.

Only the on-machine cache counted down monotonically as expected, however. The others had TTLs that counted down in the long term but jumped up and down in the short term.

There was a discernible pattern, thanks to the 10-second loop interval in my test. There were multiple series of descending TTLs, swapping in and out.

This pattern revealed that there are multiple caches behind the anycast, even at my ISP, and that those caches do not share data. Each was separately populated during the first few test loop iterations, and later separately re-populated.
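
A minimal sketch of how such series can be told apart, assuming dnspython and placeholder addresses and names: each observed TTL plus the moment of observation is the absolute expiry time of whichever cache entry answered, so the number of distinct expiry times is a lower bound on the number of separate caches.

import time
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.0.2.53"]          # placeholder anycast address

expiries = set()
for _ in range(60):                            # ten minutes of samples
    ttl = resolver.resolve("test.example.com.", "A").rrset.ttl
    # Observation time plus TTL is constant for any one cache entry, so each
    # distinct value (rounded to absorb jitter) is a separately-populated cache.
    expiries.add(round((time.time() + ttl) / 30))
    time.sleep(10)

print("at least", len(expiries), "separate caches answered")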

[Continued…] #DomainNameSystem #CloudFlare #Quad9 #GooglePublicDNS #BendersTest

2025-07-17

[…Continued]

The on-machine cache capped the 42 day TTL down to 1 week, as documented.

There was no pressure to evict the resource record set, even though the machine was not dedicated to just the test and other use was being made of the on-machine cache. There was no cache miss at all after the first one.

My ISP's proxy DNS servers also capped the TTL down to 1 week, interestingly.

Only #CloudFlare passed through the original 42 day TTL. The high TTLs might lead one to conclude that CloudFlare thus cached the longest and best. In reality it cached the shortest and worst, more on which in a moment.

#GooglePublicDNS and #Quad9 capped the 42 day TTL the most aggressively, the former reducing it to a couple of days, the latter to a mere 12 hours. They turned out to do better than CloudFlare, however.
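
Observing the capping is just a matter of asking each server for the same long-TTL name and comparing what comes back; a sketch, assuming dnspython and a placeholder test name, with 127.0.0.1 standing in for the on-machine cache:

import dns.message, dns.query

def ttl_via(server, name="test.example.com."):
    # The TTL in the answer is whatever countdown value this server's cache holds.
    response = dns.query.udp(dns.message.make_query(name, "A"), server, timeout=5)
    return response.answer[0].ttl if response.answer else None

for server in ("127.0.0.1", "1.1.1.1", "8.8.8.8", "9.9.9.9"):
    print(server, ttl_via(server))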

[Continued…] #DomainNameSystem #BendersTest

2025-07-17

[…Continued]

The latency of the on-machine server, the total transaction time, was always in single milliseconds after the very first cache-miss query.

The actual latencies of all of the #Quad9, #GooglePublicDNS, and #CloudFlare public proxy DNS servers were in tens of milliseconds for cache hits.

My ISP's proxy DNS servers are 6 hops away, and also had an actual latency in the tens of milliseconds, but slightly shorter than those of the third-party ones. None of the third-party ones are in fact closer than 7 hops away.

The latency to the relevant content DNS server was in the hundreds of milliseconds, and the latencies of the third-party proxy DNS servers when they had cache misses were between this and twice this.
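
Measuring that total transaction time needs nothing fancier than a wall-clock timer around each query; a sketch, assuming dnspython and placeholder names:

import time
import dns.message, dns.query

def transaction_time_ms(server, name="www.example.com."):
    # One query, one response, timed end to end over UDP.
    query = dns.message.make_query(name, "A")
    started = time.monotonic()
    dns.query.udp(query, server, timeout=5)
    return (time.monotonic() - started) * 1000.0

print(transaction_time_ms("127.0.0.1"))   # the on-machine cache
print(transaction_time_ms("9.9.9.9"))     # one of the third-party servers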

[Continued…] #DomainNameSystem #BendersTest
