I hate how correct you are.
Working at a company with only two engineers and a few sales and marketing people, I get a constant stream of "hey, I made that feature with Claude, when can we ship it to the customer? I showed them and they really like it", only to look at the code and find that it doesn't adhere to any of our standards and isn't of good quality either. But if you say that, the answer is "yeah, but everyone is AI-shipping now and we can't be the ones not doing it, we'll lose customers..." Sure, but now we're losing maintainability and our understanding of the codebase, and making ourselves dependent on LLM providers who are getting more expensive every week.
Probably the most common reason to use DNSSEC is to check a box on a list of compliance rules. And I don't think this will change anything for people who need DNSSEC for compliance.
There's no commercial compliance regime that requires DNSSEC (FedRAMP might be the only exception --- I'm uncertain about the current state of FedRAMP DNSSEC rules --- but that makes sense given that DNSSEC is a giant key escrow scheme.)
FedRAMP requires it, although like many requirements, you may be able to get out of it if you have a good reason and/or your sponsoring agency doesn't care about it.
There are also some large businesses that require, or at least strongly pressure, SaaS providers to use DNSSEC. You can often contest that, but if you have DNSSEC, that's one less thing to argue about in the contract.
Which businesses are those? (I ask because if they're North American, I have a pretty good sense of which large North American businesses even have DNSSEC signatures set up, and it's not many; small enough that you can easily memorize the list.)
MTA-STS was standardized explicitly to support the (nearly universal) use case of mail providers without DNSSEC. Even O365, which ostensibly supports DANE/DNSSEC for email security, does so only for select customers and not for ordinary ones (go look for the TLSAs).
neverssl.com works fine for me; there's only a small warning in the place where the padlock usually is, which no one checks anyway.
The browser would be very unhappy with an <input type="password"/> on a non-TLS site (localhost excepted). HSTS would trigger the "massive" warning and refuse to load the site, however.
I just checked it. You mean the very small open padlock icon? The era of browsers warning loudly about HTTP was a decade ago, it got reversed due to pushback.
Well I checked both Chrome and Firefox on mobile and my desktop and they were all much more obvious than just an "open padlock". They both said "Not Secure" and in Firefox it was bright red text. Also in incognito mode Chrome refused to even open the site without a full screen warning. They all make it super clear non-HTTPS sites are not secure so I'm not really sure what your point is?
I doubt it. The root cause of this was a root server misconfiguration or bug. It happened to DNSSEC records this time, which is a pain, but next time it might as well flip bits or point to wrong IP addresses instead.
Paradoxically, resolvers wouldn't have noticed the misconfiguration if it weren't for DNSSEC.
It's a pretty hard argument to work around: WebPKI certificates should go in the DNS, and also the largest DNS providers might at any moment decide not to validate DNSSEC anymore to get through an outage.
Yes, it's a crappy outcome, but endpoints can still choose to enforce this. Further, it's not a persuasive argument against more DNSSEC usage, since if there was more DNSSEC usage then resolvers would be more reluctant to disable it.
If there's going to be a single point of failure in front of your website, that single point of failure may as well be the only single point of failure instead of having two single points of failure, and it's probably important that people can't spoof responses.
Nobody had to hack it. A system at DENIC broke, and so Cloudflare turned off DNSSEC validation for all of their users accessing .de. If DNSSEC was actually important for the security model of those users, that would be a huge deal.
If DNSSEC is part of your security model, you want local validation, not reliance on a third-party resolver that you don't have a contract with.
Beyond that, DNS has the AD bit. If you need DNSSEC secure data (for example for the TLSA record), then when Cloudflare turns off DNSSEC validation, the AD bit will be clear and things will stop working.
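A rough way to see this from the command line (the mail host name below is just a placeholder; use whatever name your TLSA records live under):
$ dig +dnssec @1.1.1.1 _25._tcp.mail.example.com. TLSA
# +dnssec sets the DO bit; a validated answer has "ad" in the header's flags line
$ dig +dnssec +cd @1.1.1.1 _25._tcp.mail.example.com. TLSA
# +cd tells the resolver to skip validation, so "ad" is absent, which is roughly
# what every client saw while validation was switched off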
Am I the only one who thinks that the AD bit is about as useful as the RFC 3514 evil bit?
We have this elaborate, complex, and extremely fragile cryptographic system behind DNSSEC and we distill it down to one single bit that we carry over unauthenticated links. Why?
At least WebPKI answers the right question: should I trust a particular claim to represent host.domain at a time within a stated validity range? (Of course it defers determining the current time to some unspecified other mechanism.) DNSSEC tries to do everything and cannot survive an upstream error even within the downstream validity window. And yet, despite the fact that most of the spec leans heavily toward failing secure, the actual communication of validation status is entirely unprotected.
I can answer that! Because when DNSSEC was designed, it was believed that server-side compute could not keep up with per-request cryptography. DNSSEC contorts itself in several ways to maintain affordances for offline cryptography, which has been retconned into a security mechanism but was in reality just a bunch of non-cryptography-engineers making a terrible prediction about the feasibility of cryptography.
(Source: I'm one of the few weirdos on Earth who has read the mailing lists all the way back to when DNSSEC was a TIS project).
The intention is clearly that the client is a minimal implementation that will only forward a request to a resolver it trusts. The fact that Cloudflare and Google have convinced us all to use Cloudflare's and Google's resolvers is the problem.
DNSSEC and WebPKI both rely on chains of trust. If the problem was that .de's keys expired, you'd have the same problem when Let's Encrypt's keys expired.
> If the problem was that .de's keys expired, you'd have the same problem when Let's Encrypt's keys expired.
Even this incident proves that’s not the case.
If LetsEncrypt has a temporary availability issue, my users don't notice unless the outage lasts longer than the time I have left to renew a cert.
If LetsEncrypt has a CA cert expire, I can get a cert from another provider.
If DENIC’s DNSSEC records break, either due to an operational error or an expiry issue, my .de site becomes inaccessible and my users see a DNS lookup failure. My only option is to hope resolvers do what Cloudflare did, or move my site to a new TLD and just pray that TLD never has the same problem.
The WebPKI works end-to-end, all the way to user devices; DNSSEC builds an explicit client/server trust split into the protocol. The former is obviously superior to the latter.
Yes, it's also quite damaging to DNSSEC's trust model that the world has transitioned to centralized resolver caches. But the fundamental problem we're talking about with the AD bit wouldn't vanish if 8.8.8.8 and 1.1.1.1 did too; instead, users would be even more reliant on ISP nameservers, which are literally the least trustworthy pieces of infrastructure on the entire Internet.
It is indeed a bit sad that Cloudflare had to turn off DNSSEC completely. But I completely understand that they don't have a production-ready, tested path to override DNSSEC validation for only some domains.
The issue has been identified as a DNSSEC signing problem at DENIC, the organization responsible for the .DE top-level domain. Cloudflare has temporarily disabled DNSSEC validation on 1.1.1.1 resolver in order to allow .DE names to continue to resolve. DNSSEC validation will be re-enabled when the signing problems at DENIC are known to have been resolved.
---
(and in case it changes again, now it says)
---
The issue has been identified as a DNSSEC signing problem at DENIC, the organization responsible for the .DE top-level domain. Cloudflare has temporarily disabled DNSSEC validation for .de domains on 1.1.1.1 resolver (as per RFC 7646) in order to allow .DE names to continue to resolve. DNSSEC validation will be re-enabled when the signing problems at DENIC are known to have been resolved.
It didn't originally say that. They added the clarification just a few minutes ago. The guidelines ask you not to ask people these kinds of questions, for what it's worth.
That comparison really makes the contrast clear: losing TLS would’ve put millions of people either into full downtime or immediately at significant risk (you can’t uncapture data). Losing DNSSEC, however, placed no one at risk and improved uptime.
There’s a reason why one of the two has roughly 10% adoption after three decades and the other is high 90-something percent.
I have never used DNSSEC and never really bothered implementing it, but do I understand correctly that we took the decentralized platform DNS was and added a single-point-of-failure certificate layer on top of it, which now breaks because the central organisation managing this certificate has an outage, taking basically all domains down with it?
> which now breaks because the central organisation managing this certificate has an outage
The ".de" TLD is inherently managed by a single organization, and things wouldn't be much better if its nameservers went down. Some of the records would be cached by downstream resolvers, but not all of them, and not for very long.
> we took the decentralized platform DNS was and added a single-point-of-failure certificate layer on top of it
DNSSEC actually makes DNS more decentralized: without DNSSEC, the only way to guarantee a trustworthy response is to directly ask the authoritative nameservers. But with DNSSEC, you can query third-party caching resolvers and still be able to trust the response because only a legitimate answer will have a valid signature.
Similarly, without DNSSEC, a domain owner needs to absolutely trust its authoritative nameservers, since they can trivially forge trusted results. But with DNSSEC, you don't need to trust your authoritative nameservers nearly as much [0], meaning that you can safely host some of them with third-parties.
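To make that concrete: if you have BIND's delv installed (a dig-like tool that does its own DNSSEC validation), you can query a third-party cache and still check the chain locally; something like this, with a placeholder domain name:
$ delv @1.1.1.1 some-signed-domain.example. A
# delv verifies the RRSIG chain itself, starting from the root trust anchor, and
# prints "; fully validated" on success, independent of whatever AD bit the
# upstream resolver did or didn't set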
> DNSSEC actually makes DNS more decentralized: without DNSSEC, the only way to guarantee a trustworthy response is to directly ask the authoritative nameservers. But with DNSSEC, you can query third-party caching resolvers and still be able to trust the response because only a legitimate answer will have a valid signature.
but how would one verify the signature if the DNSKEY expired and you cannot fetch a fresh one because the organisation providing those keys is down? As far as I understand, the TTL for those keys is different, and for DENIC it seems to be 1h [0]. So if they are down for more than an hour and all RRSIG caches expire, DNS zones which have a higher TTL than 1h but use DNSSEC would also be down?
[0]
dig RRSIG de. @8.8.8.8
de. 3600 IN RRSIG DNSKEY 8 1 3600 20260519214514 20260505201514 26755 de. [...]
> but how would one verify the signature if the DNSKEY expired and you cannot fetch a fresh one because the organisation providing those keys is down?
In theory, this shouldn't happen, because if you use the same TTLs for your DNSSEC records and your "regular" records, then if the regular records are present in the cache, the DNSSEC records will be too.
> So if they are down for more than an hour and all RRSIG caches expire, DNS zones which have a higher TTL than 1h but use DNSSEC would also be down?
Yes, but I'd argue that the DNSSEC records should have the same TTLs for exactly this reason. That's how my domain is set up at least:
$ dig +nocmd +nocomments +nostats +dnssec @any.ca-servers.ca. maxchernoff.ca. DS
;maxchernoff.ca. IN DS
maxchernoff.ca. 86400 IN DS 62673 15 2 487B95FEFF04265826F037C9DB2E1F14FF9ADBF2C7BE246A2B9F9BFD 481BE928
maxchernoff.ca. 86400 IN RRSIG DS 13 2 86400 20260512131336 20260505104433 46762 ca. ppc9LrWniPWdAI2Xq1g3FrYJGQVYayA5TtgFRkJfqOqNfe6zu/n0gwti IO3c9pOoUpIum5gPB6GLOGbGU+sfhg==
$ dig +nocmd +nocomments +nostats +dnssec @ns.maxchernoff.ca. maxchernoff.ca. DNSKEY
;maxchernoff.ca. IN DNSKEY
maxchernoff.ca. 86400 IN DNSKEY 257 3 15 DYs9mPDMRx/hQ9R9iGLi1Ysx1eFdhlXeCujY6PqJWeU=
maxchernoff.ca. 86400 IN RRSIG DNSKEY 15 2 86400 20260518072823 20260504055823 62673 maxchernoff.ca. RgPyEvB/kjXIvoidRNF/hfm7utzDs0kxXn4qJL17TUAVYOdbLl0Vd8zt E52bGBBFv2TNEnf9O9LkiT2GBH0jAA==
$ dig +nocmd +nocomments +nostats +dnssec @ns.maxchernoff.ca. maxchernoff.ca. A
;maxchernoff.ca. IN A
maxchernoff.ca. 86400 IN A 152.53.36.213
maxchernoff.ca. 86400 IN RRSIG A 15 2 86400 20260518072823 20260504055823 62673 maxchernoff.ca. bRfTVHnMjCFRaIh5uc0aT1vD4yh1UZrqOZDRunLbxFI1eth6nNlTiOOC xti7axVoXwB6VAoHOAnW0nL0eeJNDQ==
Thanks for explaining. I thought that once any key in the chain of trust of any domain's DNSSEC expired, the whole record went stale, but it turns out that was a wrong assumption. If the DNSKEY and the other records have the same TTL and the DNSSEC verification is also "cached", then that makes a lot more sense.
> I thought that once any key in the chain of trust of any domain's DNSSEC expired, the whole record went stale, but it turns out that was a wrong assumption.
No, that actually is true, but I think (?) that the part that you were missing is that DNSSEC records are mostly the same as any other record, so they can be cached the same way. And since most resolvers are DNSSEC-enabled these days, they'll tend to request (and therefore cache) the DNSSEC records at the same time as the regular records.
There are tons of edge cases here, but it should hopefully be pretty rare for a cache to have a current A/AAAA record and stale/missing DNSSEC records.
> the DNSSEC verification is also "cached"
Technically the verification itself isn't cached, but since verification only depends on the chain of DNSSEC records, and those records are cached, it has the same effect.
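A quick way to convince yourself, against any signed domain (placeholder name here):
$ dig +dnssec @1.1.1.1 some-signed-domain.example. A
# the answer section carries both the A record and its covering RRSIG, and on a
# cache hit both show the same remaining TTL, because the resolver fetched and
# stored them together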
DNSSEC doesn't change the degree to which DNS is decentralized. It's always been hierarchical. In the absence of caching, every DNS query starts with a request to the root DNS servers. For foo.com or foo.de, you first need to query the root servers to determine the nameservers responsible for .com and .de. Then you contact the .com or .de servers to ask for the foo.com and foo.de nameservers. All DNSSEC does is add signatures to these responses, and adds public keys so you can authenticate responses the next level down.
A list of root nameserver IP addresses is included with every local recursive DNS resolver. The list changes, albeit slowly, over the years. With DNSSEC, resolvers also ship with the root zone's public key (the trust anchor), which likewise rotates, slowly.
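For instance, you can pull the root zone's keys straight from a root server and compare against the published trust anchor:
$ dig +dnssec @a.root-servers.net. . DNSKEY
# the DNSKEY record with flags 257 is the root key-signing key; its key tag should
# match the trust anchor your resolver ships with (IANA publishes the current one
# at https://data.iana.org/root-anchors/root-anchors.xml)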
What you see here is decentralisation working: the issue is with the operator of the .de TLD, and as such only that TLD is affected.
DNS is not decentralised in the sense that multiple organisations run the infrastructure of a single TLD; each TLD is always run by a single entity (.com and .net, for example, are operated by Verisign).
So the specific nature of the issue the operator has does not change the impact.
Resolvers are free to cache each TLD's keys. There's a finite, well-known list of TLDs and their keys - you can download all the root zone data from IANA: https://www.iana.org/domains/root/files (it's a few megabytes in uncompressed text form)
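If anyone wants to poke at that by hand, the zone file itself is a plain-text download (linked from that IANA page):
$ curl -sO https://www.internic.net/domain/root.zone
$ awk '$4 == "DS" && $1 == "de."' root.zone
# the DS records that delegate trust to .de
$ awk '$4 == "DS" {print $1}' root.zone | sort -u | wc -l
# how many TLDs publish DS records at all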
The world might be a little bit better with more decentralization of the root zone.
The idea of having local LLMs accessible in the browser for privacy reasons is nice, I guess, but when each browser has a different model attached to this API, testing becomes even more of a nightmare than it is now. I wonder if this will drive more users towards Chrome, because most usages of this API might just be tailored to fit the Gemini Nano model?
@tom1337 The testing fragmentation is the real problem here. Prompts are not model-agnostic in practice - a carefully tuned prompt for Gemini Nano 3 v2025 will silently degrade on whatever Gecko ships, and the API gives you no capability introspection to branch on. This is actually worse than the WebGL situation, where at least you could query extension support. Shipping a feature that depends on prompt quality against an unnamed, versioned-behind-the-browser model is closer to shipping a feature that depends on the user's installed dictionary.
That's at least genuine to some degree. Like, ok, good to know it's not officially a step back... But stuff like "smallest notch ever in an iPhone" outright misleads consumers when there are other brands out there that easily beat them.
If you are on GitHub/GitLab, Renovate bot is a good option for automating dependency updates via PRs while still maintaining pinned versions in your source.
doesn't seem that scheduled to me