This is a desired outcome. The WebPKI ecosystem would really like it if everyone stopped depending on it for internal things, because internal PKI has a pretty different set of requirements. Long-lived certs with an internal CA make a lot of sense and are often more secure than using a public CA.

Our internally provided certs from various CAs have a TTL of 72 hours and should be renewed every 48 hours.

It's been a huge pain, as we have encountered a ton of bugs and missing features in libraries and applications when reloading certs like this. And we have some really ugly workarounds in place, because some applications put "reload a consul client" on the same level as "reload all config, including opening new sockets, adjusting socket parameters, doing TCP connection handover" - all to rebuild a stateless client that just passes a few parameters to a standard HTTP client. But oh well.

But I refuse to back down. Reload your certs and your secrets. If we encounter a situation in which we have to mass-revoke and mass-reissue internal certs, it'll be easy for those who do. I don't have time for everyone else.
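As a rough illustration of the policy above (72h TTL, renewal every 48h): a cert renewed on schedule always has more than 24 hours of validity left, which is easy to check with openssl. Everything below - the filenames and the self-signed demo cert - is invented for the sketch, not the actual internal setup.

```shell
# Stand-in for whatever the internal CA issues: a throwaway
# self-signed cert valid for 72 hours (3 days).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo.internal" \
  -days 3 -keyout demo.key -out demo.crt 2>/dev/null

# With renewal every 48h, a healthy cert always has >24h left.
# `openssl x509 -checkend N` exits non-zero if the cert expires within N seconds.
if openssl x509 -checkend $((24 * 3600)) -noout -in demo.crt >/dev/null; then
  echo "cert OK: more than 24h of validity left"
else
  echo "ALERT: renewal window missed, cert expires within 24h" >&2
fi
```

Wiring a check like this into monitoring is what turns "someone forgot to renew" into a page instead of an outage.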

> Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.

Do you promise to come back and tell us the story about when someone went on vacation and the certs issued on a Thursday didn't renew over the weekend and come Monday everything broke and no one could authenticate or get into the building?

At least that sounds like it would be a more interesting story than the one where the person who quit a year ago didn't document all the places they manually installed the 2-year certificate.

I will. We've been betting Postgres connectivity for a few hundred applications on this over the past three years. If this fucks up, you'll hear about it without me.

I'm curious what requirement drove you to such an arbitrarily small TTL, other than "because we can" dick-measuring geekery.

I applaud you for sticking to your guns though.

At the end of the day, we were worried about exactly these issues - if an application only has to reload certs once every 2 years, it will always end up a mess.

And the conventional wisdom for application management and deployments is: if it's painful, do it more often. This way, applications in the container infrastructure are forced to get certificate deployment and reloading right on day 1.

And yes, some older applications that were migrated to the infrastructure went ahead and loaded their credentials and certificates for other dependencies into their database or something like that, and then ended up confused when this didn't work at all. Now it's fixed.

Why would the cert renewal be manual?

That's how it used to be done. Buy a certificate with a 2 year expiry and manually install it on your server (you only had 1; it was fine).

I can tell you that there are still quite a few of us out here doing the once-a-year manual renewal. I have suggested a plan to use Let's Encrypt with automated renewal, but some companies are using old technology and/or old processes that "seniors" are comfortable with, since they understand them, and suggesting a better process isn't always looked upon favorably (especially if your job relies on the manual renewal process as one of those cryptic things only IT can do).

Some of this rhymes with Colm MacCárthaigh's case against mTLS.

https://news.ycombinator.com/item?id=25380301

This has been our issue too. We've had mandates for rotating OAuth secrets (client ID & client secret).

Except there are no APIs to rotate those. The infrastructure doesn't exist yet.

And refreshing those automatically does not validate ownership, unlike certificates, where you can do a DNS check or an HTTP check.

Microsoft has some technology where, alongside these tokens, there is also a per-machine certificate that is used to sign requests, and those certificates can't leave the machine.
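The general shape of that scheme - proof of possession of a key rather than a bare bearer token - can be sketched with plain openssl. This is only an illustration, not Microsoft's actual protocol, and every filename and payload here is made up:

```shell
# The machine's key pair; in the real scheme the private key would be
# non-exportable (e.g. held in a TPM) so it can't leave the machine.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
  -out machine.key 2>/dev/null
openssl pkey -in machine.key -pubout -out machine.pub

# Per request: sign the request body with the machine key...
printf '{"action":"rotate-secret"}' > request.json
openssl dgst -sha256 -sign machine.key -out request.sig request.json

# ...so the server can verify against the registered public key. A stolen
# token without access to the machine key is useless.
openssl dgst -sha256 -verify machine.pub -signature request.sig request.json
# prints "Verified OK"
```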

We've also felt the pain for OAuth secrets. Current mandates for us are 6 months.

Because we run on Azure / AKS, switching to federated credentials ("workload identities") with the app registrations made most of the pain go away because MS manages all the rotations (3 months) etc. If you're on managed AKS the OIDC issuer side is also automagic. And it's free. I think GCP offers something similar.

https://learn.microsoft.com/en-us/entra/workload-id/workload...

Browsers don't design for internal use, though. They insist on HTTPS for various things that are intranet-only, such as some browser APIs, PWAs, etc.

As is already described by the comment thread we're replying in, "internal use" and "HTTPS" are very compatible. Corporations can run an internal CA, sign whatever internal certs they want, and trust that CA on their devices.

You use the terms "internal use" and "corporations" like they're interchangeable, but that's definitely not the case. Lots of small businesses, other organizations, or even individuals want to have some internal services, and having to set up a CA and add the certs to all client devices just to access some app on the local network is absurd!

The average small business in 2025 is not running custom on-premise infrastructure to solve their problems. Small businesses are paying vendors to provide services, sometimes in the form of on-premise appliances but more often in the form of SaaS offerings. And I'm happy to have the CAB push those vendors to improve their TLS support via efforts like this.

Individuals are in the same boat: if you're running your own custom services at your house, you've self-identified as being in the amazingly small fraction of the population with both the technical literacy and desire to do so. Either set up LetsEncrypt or run your own ACME service; the CAB is making clear here and in prior changes that they're not letting the 1% hold back the security bar for everybody else.

I don't think it's absurd; personally, it feels easier to set up an internal CA than some of the alternatives.

In the hackiest of setups, it's a few commands to generate a CA and issue a wildcard certificate for everything. Then a single line in the bootstrap script or documentation for new devices to trust the CA, and you're done.
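For reference, a minimal sketch of that hacky setup - all names, lifetimes, and the `*.lab.internal` domain are made up:

```shell
# 1. Generate the CA key and a self-signed root cert (10 years).
openssl req -x509 -newkey rsa:4096 -nodes -subj "/CN=Lab Internal CA" \
  -days 3650 -keyout ca.key -out ca.crt 2>/dev/null

# 2. Generate a key and CSR for the wildcard name.
openssl req -newkey rsa:2048 -nodes -subj "/CN=*.lab.internal" \
  -keyout wild.key -out wild.csr 2>/dev/null

# 3. Sign the CSR with the CA, adding the SAN modern browsers require.
printf 'subjectAltName=DNS:*.lab.internal\n' > san.ext
openssl x509 -req -in wild.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -extfile san.ext -out wild.crt 2>/dev/null

# 4. Check the chain. New devices then only need to trust ca.crt,
#    e.g. on Debian-ish systems:
#      sudo cp ca.crt /usr/local/share/ca-certificates/lab-ca.crt
#      sudo update-ca-certificates
openssl verify -CAfile ca.crt wild.crt   # prints "wild.crt: OK"
```

(Hacky on purpose: no intermediate CA, no revocation, the CA key sitting on disk. It demonstrates the "few commands" claim, not best practice.)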

Going a few steps further, setting up something like HashiCorp Vault is not hard, and regardless of org size you need to do secret distribution somehow.

> it's a few commands to generate a CA

My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader

Myself, I'm employed at a small business and we're all as tech-savvy as it gets. It still took me several days to set it up on secure hardware (smartcard, figuring out compatibility and broken documentation), making sure I understood what all the options do and that it would stay secure for years to come, working out what the procedure for issuing should be, etc. Eventually got it done, handed it over to the higher-up who gets to issue certs, distributed the CA cert to everyone... and it's never used. We have a wiki page with TLS and SSH fingerprints instead.

> My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader

This is fair. I assumed all small businesses would be tech startups, haha.

The vast majority of companies operate just fine without understanding anything about building codes or vehicle repair etc.

Paying experts (ed.: to set up internal infrastructure) is a perfectly viable option, so the only real question is the amount of effort involved, not whether random people know how to do something.

Paying an expert to come set up a local CA seems rather silly when you'd normally outsource operating one to the people who professionally run a CA

You’d only need internal certificates if someone had set up internal infrastructure. Expecting that person to do a good job means having working certificates be they internal or external.

> Paying experts is a perfectly viable option

Congrats for securing your job by selling the free internet and your soul.

I’m not going to be doing this, but I care about knowledge being free not labor or infrastructure.

If someone doesn’t want to learn then nobody needs to help them for free.

[deleted]

We have this; it's not trivial for a small team, and you have to deal with stuff like a conda env coming with its own set of certs, so you have to take care of that. It's better than the alternative of fighting with browsers, but it's still not without extra complexity.

For sure, nothing is without extra complexity. But, to me, it feels like additional complexity for whoever does DevOps (which is where I think it should be), and it takes complexity away from all other users.

[deleted]

Wow, amazing how out of touch this is.

Can you explain? I don't see why

You seem to think every business is a tech startup and is staffed with competent engineers.

Perhaps spend some time outside your bubble? I’ve read many of your comments and you do seem to be caught in your own little world. “Out of touch” is apt and you should probably reflect on that at length.

> You seem to think every business is a tech startup and is staffed with competent engineers.

If we’re talking about businesses hosting services on some intranet and concerned about TLS, then yes, I assume it’s either a tech company or they have at least one competent engineer to host these things. Why else would the question be relevant?

> “Out of touch” is apt and you should probably reflect on that at length.

That’s a very weird personal comment based on a few comments on a website that’s inside a tech savvy bubble. Most people here work in IT, so I talk as if most people here work in IT. If you’re a mechanic at a garage or a lawyer at a law firm, I wouldn’t tell you rolling your own CA is easy and just a few commands.

You know, your perspective is valuable; I often operate as if the context is “all people everywhere”, which is rarely true and is definitely not true here. So I will take the error as mine and thank you for pointing it out :)

Sounds like there is a market for a browser that is intranet-only and doesn't do various checks.

Good luck getting that distributed everywhere including the iOS app store and random samsung TVs that stopped receiving updates a decade ago.

Not to mention the massive undertaking that even just maintaining a multi-platform chromium fork is.

Why would you want this? Then on production, you'll run into issues you did not encounter on staging because you skipped various checks.

Getting my parents to add a CA to their android, iphone, windows laptop and macbook just so they can use my self hosted nextcloud sounds like an absolute nightmare.

The nightmare only intensifies for small businesses that allow their users to bring their own devices (yes, yes, sacrilege but that is how small businesses operate).

Not everything is a massive enterprise with an army of IT support personnel.

Rolling out LetsEncrypt for a self-hosted Nextcloud instance is absolutely trivial. There are many reasons corporations might want to roll their own internal CA, but simple homelab scenarios like these couldn't be further from them.

Sure, which is what I do. But the point is that this is very much internal use and rolling my own CA for it is a nightmare.

Would you suggest something? I do this, but I'm not sure I would call maintaining my setup trivial. I got in trouble recently because my domain registrar deprecated an API call, and that ended up being the straw that broke the camel's back in my automation setup. Or at least it did 90 days later.

I'm not a Nextcloud user, but I have a homelab and use Traefik as my reverse proxy, configured to use Let's Encrypt DNS challenges to issue wildcard certificates. I use Cloudflare's free plan to manage DNS for my domains, although the registrar is different. This has been a set-it-and-forget-it solution for the last several years.

Let's Encrypt cert renewal comes out of the box with Traefik? I haven't kept up with it. I'm on a similar set-and-forget schedule with configured nginx and some CrowdSec stuff, but the API change ended up killing off an afternoon of my time.

Yep, it supports ACME (Let's Encrypt) out of the box, and many DNS providers too. I mainly use Namecheap as my registrar but configure Cloudflare as my DNS resolver; I find this easier from a configuration perspective, and the CF APIs have been stable for me so far.

Traefik (by default) will attempt certificate renewal 30 days before expiry. Perhaps the defaults will change if the lifetime becomes 45 days. I don't think it's possible to override this value, without adjusting the certificate expiry days, but I've never felt the need to adjust it.

I actually do this for my homelab setup. Everyone basically gets the local CA installed for internal services as well as a client cert for RADIUS EAP-TLS and VPN authentication. Different devices are automatically routed to the correct VLAN and the initial onboarding doesn't take that long if you're used to the setup. Guests are issued a MSCHAP username and password for simplicity's sake.

For internal web services I could just use Let's Encrypt, but I need to deploy the client certs anyway for network access, and I might as well use my internal cert for everything.

Personally, I'd absolutely refuse to install your CA as a guest. That would give you far too much power to mint certificates for sites you have no business snooping on.

Guests don't install my CA as they don't need to access my internal services. If I wanted to set up an internal web server that's accessible to both guests and family members I'd use Let's Encrypt for that.

Why are your parents on a corporation's internal network?

What corporation are you talking about? Have you never heard of someone self hosting software for their family and friends? You know, an intranet.

Just buy a domain and use DNS verification to get real certs for whatever internal addresses you want to serve? Caddy will trivially get certs for you with one line of config.
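Roughly like this, assuming a Caddy build that includes the relevant DNS plugin (the hostname, upstream port, and Cloudflare token below are placeholders):

```
# Hypothetical Caddyfile: Caddy obtains and renews the certificate itself
# via a DNS-01 challenge, so the host never has to be reachable from the
# public internet.
internal.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy 127.0.0.1:8080
}
```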

Or cheat and use tailscale to do the whole thing.

Self-hosting doesn't usually imply connecting over a private network.

Do I add the root CA of my router manufacturer so I can visit its web interface on my internal network without having half the page functionality broken because of overbearing browser manufacturers who operate the "web PKI" as a cartel? This nowadays includes things such as basic file downloads.

[deleted]

> Corporations can run an internal CA

Having just implemented an internal CA, I can assure you, most corporations can’t just run an internal CA. Some struggle to update containers and tie their shoelaces.

Yeah, but essentially every home user can only do so after jumping through extremely onerous hoops (many of which also decrease their security when browsing the public web).

I’ve done it in the past, and it was so painful, I just bit the bullet and started accessing everything under public hostnames so that I can get auto-issued Letsencrypt certificates.

Indeed they are compatible. However, HTTPS is often unnecessary, particularly in a smaller organisation, yet browsers mandate significant unnecessary complexity there. In that sense, browsers are not suited to those scenarios.

If only browsers could understand something besides HTTPS. Somebody should invent something called HTTP that is like HTTPS without certificates.

Cool. And when they invent it, it should have browser parity with respect to which API features and capabilities are available, so that we don't need to use HTTPS just so things like `getUserMedia` work.

https://www.digicert.com/blog/https-only-features-in-browser...

There are enough APIs limited to secure contexts that many internal apps become unfeasible.

Modern browsers default to trying https first.

I really don't see many scenarios where HTTPS isn't needed for at least some internal services.

Then, I'm afraid, you work in a bubble.

A static page that hosts documentation on an internal network does not need encryption.

The added overhead of certificate maintenance (and investigating when it does and will break) is simply not worth the added cost.

Of course, the workaround most shops use nowadays is just to hide the HTTP servers behind a load balancer doing SSL termination with a wildcard cert. An added layer of complexity (and now a single point of failure) just to appease the WebPKI crybabies.

Unfortunately, for a small business, there are many software packages that can cause all sorts of havoc on an internal network and are simple to install. Even just ARP cache poisoning on an internal network can force everyone offline, and even a reboot of all equipment cannot immediately fix the problem. A small company that can't handle setting up a CA won't ever be able to handle exploits like this (and I'm not saying that a small company should be able to set up their own CA, just commenting on how defenseless even modern networks are to employees who like to play around or cause havoc).

Of course, then there are the employees who could just intercept HTTP requests and modify them to include a payload to root an employee's machine. There is so much software out there that can destroy trust in a network, and it's literally download and install, then point and click, with no knowledge required. Seems like there is a market for simple and cheap solutions for internal networks for small business. I could see myself making quite a bit off it, which I did in the mid-2000s, but I can't stand doing sales any more in my life, and dealing with support is a whole issue on its own, even with an automated solution.

What overhead?

Just about every web server these days supports ACME -- some natively, some via scripts -- and you can set up your own internal CA using something like step-ca, which speaks ACME, if you don't want your certs going out to the transparency log.

The last few companies I've worked at had no http behind the scenes -- everything, including service-to-service communications was handled via https. It's a hard requirement for just about everything financial, healthcare, and sensitive these days.

> What overhead?

[proceeds to describe a bunch of new infrastructure and automation you need to setup and monitor]

So when ACME breaks - which it will, because it's not foolproof - the server securely hosting the cafeteria menus is now inaccessible, instead of being susceptible to interception or modification in transit. Because the guy that has owned your core switches is most concerned that everyone will be eating taco salad every day.

Sure it does! You may not need confidentiality, but what about integrity?

It's a very myopic take.

Someone who has seized control of your core network to the point of being capable of modifying traffic is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.

Just because something is possible in theory doesn't make it likely or worth the time invested.

You can put 8 locks on the door to your house but most people suffice with just one.

Someone could remove a piece of mail from your unlocked rural mailbox, modify it and put it back. Do you trust the mail carrier as much as the security of your internal network?

But it's not really a concern worth investing resources into for most.

> Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.

Ah, the "both me and my attackers agree on what's important" fallacy.

What if they modify the man page response to include drive-by malware?

I'm afraid you didn't read my response. I explicitly said I can't see a case where it isn't needed for some services. I never said it was required for every service. Once you've got it set up for one thing, it's pretty easy to set it up everywhere (unless you're deploying manually, which is an obvious problem).

And it is even more trivial in a small organization to install a Trusted Root for internally signed certificates on their handful of machines. Laziness isn’t a browser issue.

How is that supposed to work for an IoT device that wants to work out of the box using one of these HTTPS-only browser APIs?

I am not saying I‘d do this, but in theory you could deploy a single reverse proxy in front of your HTTP-only devices and restrict traffic accordingly.

Desired by who?

There’s nothing stopping Apple and Google from issuing themselves certificates every 10 minutes. I get no value from doing this. Building out or expanding my own PKI for my company, or setting up the infrastructure to integrate with DigiCert or whomever, gets me zero security and business value - just cost and toil.

Revocation is most often an issue when CAs fuck up. So now we collectively need to pay to cover their rears.

CAs fucking up every once in a while is inevitable. It is impossible to write guaranteed bug-free software or train guaranteed flawless humans.

The big question is what happens when (not "if") that happens. Companies have repeatedly shown that they are unable to rotate certs in time, to the point of even suing CAs to avoid revocation. They've been asked nicely to get their shit together, and it hasn't happened. Shortening cert lifetime to force automation is the inevitable next step.

Silly me, I’m just a customer, incapable of making my own risk assessments or prioritizing my business processes.

You’re portraying people suing CAs to get injunctions to avoid outages as clueless or irresponsible. The fact is, DigiCert’s actions, dictated by this CA/Browser Forum, were draconian, over-the-top responses to a minor risk. This industry trade group is out of control.

End of the day, we’re just pushing risk around. Running a quality internal PKI is difficult.

What do you mean “WebPKI … would like”. The browser vendors want one thing (secure, ubiquitous, etc), the CAs want a very different thing (expensive, confusing, etc)…

The problem is that browsers will most likely follow with enforcement of short certificate lifetimes, so internal sites will be affected as well.

Non-browser things usually don’t care even if the cert is expired or untrusted.

So I expect people will still use WebPKI for internal sites.

The browser policies are set by the same entities doing the CAB voting, and basically every prior change around WebPKI has only been enforced by browsers for CAs in the browser root trust stores. Which is exactly what's defined in this CAB vote as well.

Why would browsers "most likely" enforce this change for internal CAs as well?

Why would they? The old certificates will expire and the new ones will have short lifespans. Web browsers do not need to do anything.

That said, it would be really nice if they supported DANE so that websites do not need CAs.

'Most likely' - with the exception of Apple enforcing 825-day maximum for private/internal CAs, this change isn't going to affect those internal certificates.