> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.
DNS-01 is probably the most impactful for users of nginx instances that aren't public facing (e.g., via Nginx Proxy Manager). I really want to see DNS-01 land! I've always felt that it's also one of the cleanest challenges, because it's just updating some records and doesn't need to be directly tethered to what you're hosting.
A practical problem with DNS-01 is that every DNS provider has a different API for creating the required TXT record. Certbot has more than a dozen plugins for different providers, and the list is growing. It shouldn't be nginx's job to keep track of all these third-party APIs.
It would also be unreasonable to tell everyone to move their domains to a handful of giants like AWS and Cloudflare who already control so much of the internet, just so they could get certificates with DNS-01. I like my DNS a bit more decentralized than that.
That is true, and it is annoying. They should really just support RFC 2136 instead of building their own APIs. Lego also supports this, and pretty much all DNS servers have it implemented. At least I can use it with my own DNS server...
I wonder what a good solution to this would be? In theory, Nginx could call another application that handles the communication with the DNS provider, so that the user can tailor it to their needs. (The user could write it in Python or Go or whatever.) Not sure how robust that would be though.
You can make the NS record for the _acme-challenge.domain.tld point to another server which is under your control, that way you don't have to update the zone through your DNS hoster. That server then only needs to be able to resolve the challenges for those who query.
1. Your main domain is important.example.com with provider A. No DNS API token for security.
2. Your throwaway domain, in a dedicated account with a DNS API, is example.net with provider B, and its DNS API token is in your ACME client.
3. You create _acme-challenge.important.example.com, not as a TXT record via an API, but as a permanent CNAME to either
   _acme-challenge.example.net or
   _acme-challenge.important.example.com.example.net
4. Your ACME client writes the challenge responses for important.example.com into a TXT record at the unimportant _acme-challenge.example.net, and has API access only to provider B. If this gets hacked and example.net is lost, you change the CNAMEs and use a new domain, whatever.tld, as the CNAME target.
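The records from the steps above, sketched as zone-file lines (the names are the example domains from the steps; the TXT value is whatever the ACME server hands out at each renewal):

```
; provider A's zone: created once, by hand, no API token anywhere
_acme-challenge.important.example.com.  IN CNAME  _acme-challenge.example.net.

; provider B's zone: written by the ACME client at each renewal
_acme-challenge.example.net.            IN TXT    "<challenge digest>"
```

Validators follow the CNAME, so the CA reads the TXT record from provider B even though the certificate is for important.example.com.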
This has blown my mind. It's been a constant source of frustration, since Cloudflare stubbornly refuses to allow non-enterprise accounts to have a separate key per zone. The thread requesting it is a masterclass in passive-aggressiveness:
Could you elaborate on the separate key per zone issue? It's possible to create different API keys which have only access to a specific zone, and I'm a non-enterprise user.
I used the acme-dns server (https://github.com/joohoi/acme-dns) for this. It's basically a mini DNS server with a very basic API, backed by sqlite. All of my acme.sh instances talk to it to publish TXT records, and it accepts queries from the internet for those TXT records.
There's an NS record so *.acme-dns.example.com delegates requests to it, so each of my hosts that needs a cert has a public CNAME like _acme-challenge.www.example.com CNAME asdfasf.acme-dns.example.com, which points back to the acme-dns server.
When setting up a new hostname/certificate, a REST request is sent to acme-dns to register a new username/password/subdomain which is fed to acme.sh. Then every time acme.sh needs to issue/renew the certificate it sends the TXT info to the internal acme-dns server, which in turn makes it available to the world.
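A minimal sketch of the update half of that flow in Python, assuming an acme-dns instance at a hypothetical URL; the `/update` endpoint, `X-Api-User`/`X-Api-Key` headers, and JSON body shape follow acme-dns's documented API (the credentials come back from a one-time POST to `/register`):

```python
import json

# hypothetical acme-dns instance; substitute your own
ACME_DNS_URL = "https://acme-dns.example.com"

def build_update_request(api_user, api_key, subdomain, txt_value):
    """Build the URL, headers, and JSON body for an acme-dns /update call.

    api_user/api_key/subdomain are the credentials returned by /register;
    txt_value is the challenge response the ACME client wants published.
    """
    url = ACME_DNS_URL + "/update"
    headers = {"X-Api-User": api_user, "X-Api-Key": api_key}
    body = json.dumps({"subdomain": subdomain, "txt": txt_value})
    return url, headers, body
```

An actual hook would POST `body` to `url` with those headers; note that acme-dns requires the TXT value to be exactly 43 characters, the length of an unpadded base64url SHA-256 digest.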
You can cname _acme-challenge.foo.com to foo.bar.com.
Now, when you do the DNS challenge, you create a TXT record at foo.bar.com with the challenge response; through CNAME redirection, the TXT record is picked up as if it were directly at _acme-challenge.foo.com. You can now issue wildcard certs for anything under foo.com.
I have it on my backlog to build an automated solution later this year to handle this for hundreds of individual domains, and then put the resulting certificates in AWS Secrets Manager.
I'm also going to see if I can make some sort of ACME proxy, so internal clients authenticate to me, but they can't control DNS, so I make the requests on their behalf. We need to get prepared for ACME everywhere. In May 2026 it's 200-day certs, and it only goes down from there.
In my case I have a very small nameserver at ns.example.com. So I set the NS record for _acme-challenge.example.com to ns.example.com.
An A-record lookup for ns.example.com resolves to the IP of my server.
This server listens on port 53. It is a custom, small Python server using `dnslib`, which also listens on, let's say, port 8053 for incoming HTTPS connections.
In certbot I have a custom handler which, when it is passed the challenge for the domain verification, sends the challenge information via HTTPS to ns.example.com:8053/certbot/cache. The small DNS server then stores it and waits for a DNS query on port 53 for that challenge to come in; if one does, it serves that challenge's TXT record.
    # inside the query handler: answer TXT lookups for _acme-challenge names
    elif qtype == 'TXT':
        if qname.lower().startswith('_acme-challenge.'):
            domain = qname[len('_acme-challenge.'):].strip('.').lower()
            if domain in storage['domains']:
                for verification_code in storage['domains'][domain]:
                    a.add_answer(*dnslib.RR.fromZone(qname + " 30 IN TXT " + verification_code))
The certbot hook looks like this:
    #!/usr/bin/env python3
    import os, urllib.parse, requests

    r = requests.get('https://ns.example.com:8053/certbot/cache'
                     '?domain=' + urllib.parse.quote(os.environ['CERTBOT_DOMAIN'])
                     + '&validation-code=' + urllib.parse.quote(os.environ['CERTBOT_VALIDATION']))
That one nameserver instance and hook can be used for any domain and certificate, so it is not just limited to the example.com domain, but can also deal with challenges for, let's say, a *.testing.other-example.com wildcard certificate.
And since it already is a nameserver, it might as well serve the A records for dev1.testing.other-example.com, if you've set the NS record for testing.other-example.com to ns.example.com.
It's time for DNS providers to start supporting TSIG + key management. This is a standardized way to manipulate DNS records, and has a very granular ACL.
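For servers (or providers) that do support RFC 2136 with TSIG, the whole challenge update is a few lines of `nsupdate` (which ships with BIND); the key name, secret, and names below are placeholders:

```
nsupdate -y 'hmac-sha256:acme-key:c2VjcmV0LXNlY3JldA==' <<'EOF'
server ns.example.com
update delete _acme-challenge.example.com. TXT
update add _acme-challenge.example.com. 30 TXT "challenge-token-from-acme"
send
EOF
```

The ACL granularity lives on the server side: BIND's `update-policy` can restrict that key to TXT records under a single name.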
General note: your DNS provider can be different from your registrar, even though most registrars are also providers, and you can be your own DNS provider. The registrar is who gets the domain name under your control, and the provider is who hosts the nameserver with your DNS records on it.
No you don't; you can just run https://github.com/joohoi/acme-dns anywhere, and then CNAME _acme-challenge.realdomain.com to aklsfdsdl239072109387219038712.acme-dns.anywhere.com. Then your ACME client just talks to the acme-dns API, which lets it do nothing at all aside from deal with challenges for that one long random domain.
You can do it with an NS record, i.e., _acme-challenge.realdomain.com pointing to the DNS server that you can program to serve the challenge response. No need to make a CNAME and involve an additional domain in the middle.
I've been hoping to get ACME challenge delegation on traefik working for years already. The documentation says it supports it, but it simply fails every time.
If you have any idea how this tool would work on a docker swarm cluster, I'm all ears.
Because users will pick an alternative solution that meets their needs when they don't have the leverage or ability to change DNS providers. You have to meet users where they are when they have options.
This concerned me greatly, so I use AWS Route 53 for DNS with an IAM policy that only allows the key to work from specific IP addresses, and limits it to creating and deleting TXT records for a specific record set. I love when I can create exactly the permissions I want.
AWS IAM can be a huge pain but it can also solve a lot of problems.
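A minimal sketch of such a policy, assuming the newer Route 53 condition keys for record types; the hosted zone ID and source IP are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "route53:ChangeResourceRecordSets",
    "Resource": "arn:aws:route53:::hostedzone/Z0HYPOTHETICAL",
    "Condition": {
      "ForAllValues:StringEquals": {
        "route53:ChangeResourceRecordSetsRecordTypes": ["TXT"]
      },
      "IpAddress": { "aws:SourceIp": ["203.0.113.10/32"] }
    }
  }]
}
```

With this, even a leaked key can only touch TXT records in that one zone, and only from the allowed address.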
It's a bit of a pain in the ass, but you can actually just publish the DNS records yourself. It's clear they are on the way out though, as I believe it's only a 30-day valid certificate or something.
I use this for my Jellyfin server at home so that anyone can just type in blah.foo regardless of whether their device supports anything like mDNS, as half the devices claim to support it but don't do so correctly.
Is having one key per zone worth paying money for? It's on the list of features I'd like to implement for PTRDNS because it makes sense for my own use case, but I don't know if there's enough interest to make it jump to the top of this list.
Hurricane Electric supports a hidden primary as part of their free DNS nameserver service (do you actually want to expose your primary when someone else can handle the traffic?)
Yup, but it's a bit of a dance for bootstrapping, since they require you to already have delegated to them, but some TLDs require all NSes to be in sync and answer for the domain before delegating…
The problem with DNS-01 is that you can only use one delegation at a time. I mean, if you configure a wildcard cert with _acme-challenge.example.com in Google, you couldn't use it in Cloudflare, because it uses a single DNS authorization label (subdomain).
One of Traefik's shortcomings with ACME is that you can only use one API key per DNS provider. This is problematic if you want to restrict API keys to a domain, or use domains belonging to two different accounts. I hope nginx will not have the same constraint.
Caddy is just for developers that want to publish/test the thing they write. For power users or infra admins, nginx is still much more valuable.
And yes, I use Caddy in my home lab and it's nice and all, but it's not really as flexible as nginx is.
In case people are wondering, this is the author of Caddy.
He’s curious where it’s being used outside of home labs and small shops. Matt, it’s fantastic software and will only get better as Go improves.
I used it in a proxy setup for ingress to kubernetes that’s overlayed across multiple clouds - for the government (prior admin, this admin killed it). I can’t tell you more information than that. Other than it goes WWW -> ALB -> Caddy Cluster * Other Cloud -> K8s Router -> K8s pod -> Fiber Golang service. :chefs kiss:
When a pod is registered to the K8s router, we fire off a request to the caddy cluster to register the route. Bam, we got traffic, we got TLS, we got magic. No downtime.
I almost forgot. Matt. We added a little sugar to Caddy for our cluster. Hashicorp's memberlist. So we can sync the records. It worked great. Sadly, I can't share it but it's rather trivial to implement.
Sure. University / government sector. I know quite a few unis/projects in that field that switched to Caddy, since gigantic IP ranges and deep subdomains with stakeholders of many different classes have certain PKI requirements, and Caddy makes using ACME easy. We deploy a self-service tool where people can generate EAB IDs and HMAC keys for a subdomain they own.
Complex root-domain routing and complex dynamic rewrite logic remain behind Apache/nginx/HAProxy; a lot of apps are then served in a container architecture with Caddy for easy cert renewal, without relying on hacky certbot architectures. So we don't really serve that much traffic with just one instance. Also, a lot of our traffic is bots. More than one would think.
The basic configuration being tiny makes it the perfect fit for people with varying capabilities and know how when it comes to devops. As a devops engineer, I enjoy the easy integration with tailscale.
Not sure if you'll read this 7 days after the fact, but an easier/Caddy-native way to deal with bots, in the sense of caddy-defender or Anubis, would be a godsend.
A tool's value is in the eye of the beholder. Nginx ceased being valuable to me when they decided to change licenses, go private equity, not adapt to orchestration needs, ignore HTTP standards, and not release meaningful updates in a decade.
Only if they'd get the K8s ingress out of the WIP phase; I can't wait to possibly get rid of the cert-manager and ingress shenanigans you get with others.
Yup. I can’t wait for the day I can kill my caddy8s service.
The best thing about caddy is the fact you can reload config, add sites, routes, without ever having to shut down. Writing a service to keep your orchestration platform and your ingress in sync is meh. K8s has the events, the DNS service has the src mesh records, you just need a way to tell caddy to send it to your backend.
The feature should be done soon but they need to ensure it works across K8s flavors.
I don't even know why anyone wouldn't use the DNS challenge unless they had no other option. I've found the HTTP method to be annoying and brittle, maybe less so now with native web server support. And you can't get wildcards with it.
Spivak is saying that the DNS method is superior (i.e., you are agreeing, and I do too).
One reason I can think of for HTTP-01 / TLS-ALPN-01 is on-demand issuance: issuing the certificate when you get the request. Which might seem insane (and kinda is), but can be useful for e.g. crazy web-migration projects. If you have an enormous, deeply levelled domain sprawl that is almost never used but you need it up for some reason, it can be quite handy.
One problem with wildcards is that any service with *.foo.com can pretend to be any other service. This is an issue if you're using mutual TLS authentication and want to trust the server's certificate.
The advantage to HTTP validation is that it's simple. No messing with DNS or API keys. Just fire up your server software and tell it what your hostname is and everything else happens in the background automagically.
And this is different from DNS how, exactly? The key and resulting cert still need to be distributed among your servers no matter which method is used.
With dns-01, multiple servers could, independently of each other, fetch a certificate for the same set of hostnames. Not sure if it’s a good idea though.
I guess it depends on the CA, but some do. Let’s Encrypt does, for example. I guess it’s useful for HA deployments, where load balancers might be spread out across multiple datacenters and stuff like that.
Not really, just forward .well-known/acme-challenge/* requests to a single server or otherwise make sure that the challenge responses are served from all instances.
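With nginx, for instance, that forwarding is a one-line location block on every frontend (the upstream host name here is hypothetical):

```
# every frontend hands HTTP-01 challenges to the single host running the ACME client
location /.well-known/acme-challenge/ {
    proxy_pass http://acme-issuer.internal:8080;
}
```

The CA doesn't care which machine ultimately answers, only that the response comes back on port 80 of the name being validated.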
Just like you can point .well-known/acme-challenge/ to a writable directory you can also delegate the relevant DNS keys to a name server that you can more easily update.
I am using a bash script on my vps to get a wildcard certificate and just scp the cert to my other reverse proxies. Some using nginx but some Caddy or traefik
Does DNS-01 support DNS-over-HTTPS to the registered domain name servers? If so, then it should be extremely simple to extend nginx to support DNS claims; if not, perhaps DNS-01 needs improvements.
When placing the order, you get a funny text string from the ACME provider. You need to create a TXT record that holds this value. How you create the TXT record is up to you and your DNS server – the ACME provider doesn’t care.
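For reference, the "funny string" that goes in the TXT record is not the token itself but a digest derived from it and your account key's thumbprint, per RFC 8555 §8.4. A stdlib-only sketch (the argument values in the usage note are placeholders):

```python
import base64
import hashlib

def dns01_txt_value(token, account_thumbprint):
    """Compute the DNS-01 TXT record value per RFC 8555 §8.4.

    The value is the unpadded base64url SHA-256 digest of the key
    authorization string "<token>.<account key thumbprint>".
    """
    key_auth = f"{token}.{account_thumbprint}"
    digest = hashlib.sha256(key_auth.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

The result is always a 43-character base64url string (32 digest bytes, padding stripped), which is what the CA expects to find when it queries `_acme-challenge.<domain>`.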
I don’t believe DNS-over-HTTPS is relevant in this context. AFAIK, it’s used by clients who want to query a DNS server, and not for an operator who wants to create a DNS record. (Please correct me if I’m wrong.)
The ACME provider makes a query to the DNS server to validate the record exists and contains the right "funny string". Parent's question was whether that query is/can be made via DoH.
You want to build a DNS server into nginx so you can respond to DoH queries for the domain you are hosting on that nginx server?
Let's ignore that DoH is a client-oriented protocol and there's no sane way to run only a DoH server without an underlying DNS server. How do you plan to get the first certificate, so that the query to the DoH server doesn't get rejected for an invalid certificate?
At that point you might as well use the HTTP-01 challenge. I think the whole utility of DNS-01 is that you can use it if you don't want to expose the HTTP server to the internet.
- wildcard certs. DNS-01 is a strict requirement here.
- certs for a service whose TLS is terminated by multiple servers (e.g. load balancers). DNS-01 is a practical requirement here because only one of the terminating servers would be able to respond during an HTTP or ALPN challenge.
But then you have to redistribute the cert from that single server to all the others. Which, yes, can be done. But then you've gotta write that glue yourself. What's more, you've now chosen a special snowflake server on whom renewals depend.
In other words, no, it's not just as easy as setting up DNS-01. Different operational characteristics, and a need for bespoke glue code.
> But then you have to redistribute the cert from that single server to all the others.
Wouldn't you have to do that anyway? Or is the idea that each server requests and renews a separate cert for itself? That sounds as if you'd have to watch out for multiple servers stepping on each other's toes during the DNS-01 challenge, if there is ever a situation where two or more servers want to renew their cert at the same time.
Afaiu, that's only a problem for trying to _delegate_ to multiple clients. But routine operation with multiple clients works just fine in my experience (doing multi-region load balancing). Multiple TXT records are created, I think (speaking off the top of my head).
I wanted to quickly double-check my (albeit limited) experience against docs. The RFC[0] implies the possibility of what I described (provided a well-behaved ACME client that doesn't clobber other TXT records):
> 2. Query for TXT records for the validation domain name
> 3. Verify that the contents of one of the TXT records match the digest value
And then the certbot docs[2] show how it's a well-behaved client that wouldn't clobber TXT records from concurrent instances:
> You can have multiple TXT records in place for the same name. For instance, this might happen if you are validating a challenge for a wildcard and a non-wildcard certificate at the same time. However, you should make sure to clean up old TXT records, because if the response size gets too big Let’s Encrypt will start rejecting it.
> ...
> It works well even if you have multiple web servers.
That bit about "multiple webservers" is a little ambiguous, but I think the preceding line indicates clearly enough how everything is supposed to work.
Why would nginx ever need support for the DNS-01 challenge type? It always has access to `.well-known` because nginx is running an HTTP server for the entire lifecycle of the process, so you'd never need to use a lower level way of doing DV. And that seems to violate the principle of least privilege, since you now need a sensitive API token on the server.
Because while nginx always has access to .well-known, the thing that validates on the issuer's side might not be able to reach it. I use the DNS challenge to issue certificates for domains that resolve to IPs in my overlay network.
The issue is that supporting dns-01 isn't just supporting dns-01; it's providing a common interface to interact with different providers that implement dns-01.
dns-01 is just a challenge; which API or DNS update system should nginx support then? Some provider API, AXFR, or UPDATE?
I think this is kinda the OP's point: nginx is an HTTP server, why should it be messing with DNS? There are plenty of other ACME clients that do this with ease.
I mean, you just repeated my explanation of why supporting dns-01 in nginx isn't as straightforward as http-01. I've explained why the dns-01 challenge is still useful and might be required for some users.
> I took it as supporting adding the dns implementation
Well, I am supporting it, but I pointed out why it's not as straightforward as supporting http-01.
> I don't think that it makes sense for nginx
It makes sense for nginx because ultimately I don't make certificates just for the fun of it, I do it to give it to some HTTP server. So it makes sense.
However, this isn't a feature that would go unused by paid users, and F5 seems to be opposed to making OSS version users' lives better.
Issuing a new certificate with the HTTP challenge pretty much requires you to allow for 15 minutes of downtime. It's really not suitable for any customer-facing endpoint with SLAs.
Only if you let certbot take down your normal nginx and occupy port 80 in standalone mode. Which it doesn't need to, if normal nginx can do the job by itself.
When I need to use the HTTP challenge, I always configure the web server in advance to serve /.well-known/ from a certain directory and point certbot at it with `certbot certonly --webroot-path`. No need to take down the normal web server. Graceful reload. Zero downtime. Works with any web server.
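The zero-downtime webroot setup described here is just a static location block plus certbot's webroot mode (the paths are examples):

```
location /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;   # certbot writes challenge files under here
}
```

Then `certbot certonly --webroot --webroot-path /var/www/letsencrypt -d example.com` drops the challenge file in place while nginx keeps serving, and a deploy hook running `nginx -s reload` picks up the new certificate gracefully.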
Sounds like you’re doing it wrong. I don’t know about this native support, but I’d be very surprised if it was worse than the old way, which could just have Certbot put files in a path NGINX was already serving (webroot method), and then when new certificates are done send a signal for NGINX to reload its config. There should never be any downtime.
Certbot has a "standalone" mode that occupies port 80 and serves /.well-known/ by itself.
Whoever first recommended using that mode in anything other than some sort of emergency situation needs to be given a firm kick in the butt.
Certbot also has a mode that mangles your apache or nginx config files in an attempt to wire up certificates to your virtual hosts. Whoever wrote the nginx integration also needs a butt kick, it's terrible. I've helped a number of people fix their broken servers after certbot mangled their config files. Just because you're on a crusade to encrypt the web doesn't give you a right to mess with other programs' config files, that's not how Unix works!
Also, whoever decided that service providers were no longer autonomous to determine the expiration times of their own infrastructure's certificates should get that boot-to-the-head as well.
It is not as if they couldn't already choose (to buy) such short lifetimes already.
Certbot also fights automation and provisioning with e.g. Ansible by modifying config files to remember command-line options if you ever need to do anything manually in an emergency.
It is a terrible piece of software. I use dehydrated, which is much friendlier to automation.
Those choices and Certbot strongly encouraging snap installation was enough to get me to switch to https://go-acme.github.io/lego/, which I've been very happy with since. It's very stable and feels like it was built by people who actually operate servers.