I'm on the team at Let's Encrypt that runs our CA, and I'd say I've spent a lot of time thinking about the tradeoffs here.
Let's Encrypt has always self-imposed a 90 day limit, though of course with this ballot passing we will now have to reduce that to 47 days in the future.
Shorter lifetimes have several advantages:
1. Reduced pressure on the revocation system. For example, if a domain changes hands, then any previous certificates spend less time in the revoked state. That makes CRLs smaller, a win for everyone involved.
2. Reduced risk for certificates which aren't revoked but should have been, perhaps because a domain holder didn't know that a previous holder of that domain had it, or an attack of any sort that led to a certificate being issued that wasn't desired.
3. For fully short-lived certs (under 7 days), many user-agents don't do revocation checks at all, because that's a similar timeline to our existing revocation technology taking effect. This is a performance win for websites/user-agents. While we advocate for full certificate automation, I recognize there are cases where that's not so easy, and doing a monthly renewal may be much more tractable.
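To make point 3 a bit more concrete, here's a minimal sketch (Python, using the third-party `cryptography` package) that checks whether a given certificate's validity window falls under the 7-day threshold mentioned above; the file path is a placeholder and the threshold just mirrors the comment, not any formal definition:

```python
from datetime import timedelta

from cryptography import x509  # third-party "cryptography" package

# Hypothetical local copy of a certificate; the path is a placeholder.
with open("example-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

lifetime = cert.not_valid_after - cert.not_valid_before
print(f"certificate lifetime: {lifetime}")
print("fully short-lived (<= 7 days)?", lifetime <= timedelta(days=7))
```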
Going to shorter than a few days is a reliability and scale risk. One of the biggest issues with scale today is that Certificate Transparency logs, while providing great visibility into what certs exist (see points 1 and 2), will have to scale up significantly as lifetimes are cut.
Why is this happening now, though? I can't speak for everyone, and this is only my own opinion on what I'm observing, but: One big industry problem that's been going on for the last year or two is that CAs have found themselves in situations where they need to revoke certificates because of issues with those certificates, but customers aren't able to respond on an appropriate timeline. So the big motivation for a lot of the parties here is to get these timelines down and really drive a push towards automation.
When I first set up Let's Encrypt I thought I'd manually update the cert once per year. The 90 day limit was a surprise. This blog post helped me understand (it repeats many of your points): https://letsencrypt.org/2015/11/09/why-90-days/
So it's being pushed because it'll be easier for a few big players in industry. Everybody else suffers.
It's a decision by Certificate Authorities, the ones that sell TLS certificate services, and web browser vendors. One benefits from increased demand for their product, while the other benefits by increasing the overhead of managing their software, which raises the minimum threshold to be competitive.
There are security benefits, yes. But as someone that works in infrastructure management, including on 25 or 30 year old systems in some cases, it's very difficult to not find this frustrating. I need tools I will have in 10 years to still be able to manage systems that were implemented 15 years ago. That's reality.
Doubtless people here have connected to their router's web interface using the gateway IP address and been annoyed that the web browser complains so much about either insecure HTTP or an unverified TLS certificate. The Internet is an important part of computer security, but it's not the only part of computer security.
I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.
It is continuously frustrating to me to see the arrogant dismissiveness which people in charge of such technical groups display towards the real world usage of their systems. It's some classic ivory tower "we know better than you" stuff, and it needs to stop. In the real world, things are messy and don't conform to the tidy ideas that the Chrome team at Google has. But there's nothing forcing them to wake up and face reality, so they keep making things harder and harder for the rest of us in their pursuit of dogmatic goals.
An example of that was the committee's dismissal of privacy concerns about using MAC addresses in IPv6 addresses.
Aren’t temporary addresses a thing now?
It astounds me that there's no non-invasive local solution for going to my router's (or whatever other appliance's) web page without my browser throwing warnings and calling it evil. Truly a fuck-up (purposeful or not) by all involved in creating the standards. We need local TLS without the hoops.
Simplest possible, least invasive, most secure thing I can think of: QR code on the router with the CA cert of the router. Open cert manager app on laptop/phone, scan QR code, import CA cert. Comms are now secure (assuming nobody replaced the sticker).
The crazy thing? There are already two WiFi QR code standards, but they do not include the CA cert. There's a "Wi-Fi Easy Connect" standard that is intended to secure the network for an enterprise, and there's a random Java QR code library that made its own standard for just encoding an access point and WPA shared key (and Android and iOS both adopted it, so now it's a de-facto standard).
End-user security wasn't a consideration for either of them. With the former they only cared about protecting the enterprise network, and with the latter they just wanted to make it easier to get onto a non-Enterprise network. The user still has to fend for themselves once they're on the network.
This is a terrible solution. Now you require an Internet connection and a (non-abandoned) third-party service to configure a LAN device. Not to mention countless industrial devices where operators would typically have no chance to see a QR code.
The solution I just mentioned specifically avoids an internet connection or third parties. It's a self-signed cert you add to your computer's CA registry. 100% offline and independent of anything but your own computer and the router. The QR code doesn't require an internet connection. And the first standard I mentioned was designed for industrial devices.
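To illustrate what generating such a sticker could look like on the router side, here's a rough sketch (assuming Python with the `qrcode`/Pillow and `cryptography` packages, a PEM copy of the router's CA cert at `router-ca.pem`, and a made-up `CACERT:` prefix that is not part of any existing QR standard):

```python
import qrcode  # third-party "qrcode" package (uses Pillow for image output)
from cryptography import x509
from cryptography.hazmat.primitives import hashes

# Load the router's self-signed CA certificate (placeholder path).
with open("router-ca.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())

# Encoding just the SHA-256 fingerprint keeps the QR code small; the scanning
# app would fetch the full cert from the router and compare it against this pin.
fingerprint = ca_cert.fingerprint(hashes.SHA256()).hex()
qrcode.make(f"CACERT:sha256/{fingerprint}").save("router-ca-qr.png")
print("encoded fingerprint:", fingerprint)
```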
Not only would that set a questionable precedent if users learn to casually add new trust roots, it would also need support for new certificate extensions to limit validity to that device only. It's far from obvious that would be a net gain for Internet security in general.
It might be easier to extend the URL format with support for certificate fingerprints. It would only require support in web browsers, which are updated much faster than operating systems. It could also be done in a backwards-compatible way, for example by extending the username syntax. That way old browsers would continue to show the warning and new browsers would accept the self-signed URL format in a secure way.
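As a sketch of what that could look like, imagine the expected fingerprint riding in the (otherwise rarely used) userinfo part of the URL; the `fp-sha256-` prefix and truncated hash below are purely illustrative, not an existing convention:

```python
from urllib.parse import urlsplit

# Hypothetical URL format: old browsers treat the userinfo as a meaningless
# username and warn exactly as they do today; new browsers would read it as a pin.
url = "https://fp-sha256-9f86d081884c7d65@192.168.1.1/"

parts = urlsplit(url)
expected_fp = None
if parts.username and parts.username.startswith("fp-sha256-"):
    expected_fp = parts.username[len("fp-sha256-"):]

print("host:", parts.hostname)
print("expected certificate fingerprint:", expected_fp)
# A supporting browser would then compare the served cert's SHA-256 to expected_fp.
```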
Have you seen the state of typical consumer router firmwares? Security hasn’t been a serious concern for a decade plus.
They only stopped using global default passwords because people were being visibly compromised on the scale of millions at a time.
Good point. There are exceptions though. Eero, for example.
Your router should use ACME with a your-slug.network.home domain name (a communal suffix would be nice, but more realistically some vendor-specific domain suffix that you could CNAME), and then you should access it via that, locally. Your router should ideally run split-brain DNS for your network. If you want, you can check a box and make everything available globally via DNS-SD.
Wouldn't that allow the router to MITM all encrypted data that goes through it?
If it were a CA cert yes. It could instead be a self-signed server (non-CA) cert, that couldn't be used for requests to anything else.
All my personal and professional feelings aside (they are mixed) it would be fascinating to consider a subnet based TLS scheme. Usually I have to bang on doors to manage certs at the load balancer level anyway.
I’ve actually put a decent amount of thought into this. I envision a Raspberry Pi-sized device with a simple front-panel UI. This serves as your home CA. It bootstraps itself with a generated key and root cert and presents on the network using a self-issued cert signed by the bootstrapped CA. It also shows the root key fingerprint on the front panel. On your computer, you go to its web UI and accept the risk, but you also verify the fingerprint of the cert issuer against what’s displayed on the front panel. Once you do that, you can download and install your newly trusted root. Do this on all your machines that want to trust the CA. There’s your root of trust.
Now for issuing certs to devices like your router, there’s a registration process where the device generates a key and requests a cert from the CA, presenting its public key. It requests a cert with a local name like “router.local”. No cert is issued but the CA displays a message on its front panel asking if you want to associate router.local with the displayed pubkey fingerprint. Once you confirm, the device can obtain and auto renew the cert indefinitely using that same public key.
Now on your computer, you can hit local https endpoints by name and get TLS with no warnings. In an ideal world you’d get devices to adopt a little friendly UX for choosing their network name and showing the pubkey to the user, as well as discovering the CA (maybe integrate with dhcp), but to start off you’d definitely have to do some weird hacks.
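For anyone curious what the issuance part of this boils down to, here's a rough sketch with Python's `cryptography` package. The names ("Home CA", "router.local"), the lifetimes, and the fact that both keys are generated in one process are simplifications for illustration; in the real flow the router would keep its private key and only hand over the public key:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

now = datetime.datetime.utcnow()

# 1. Bootstrap: the CA box generates a key and a self-signed root.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Front-panel fingerprint the user compares before trusting the root.
print("root fingerprint:", ca_cert.fingerprint(hashes.SHA256()).hex())

# 2. Enrollment: the router presents its public key; once the user confirms the
#    pairing on the front panel, the CA issues a cert for "router.local".
router_key = ec.generate_private_key(ec.SECP256R1())  # would stay on the router in reality
router_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "router.local")]))
    .issuer_name(ca_name)
    .public_key(router_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("router.local")]), critical=False)
    .sign(ca_key, hashes.SHA256())
)
print("issued:", router_cert.subject.rfc4514_string())
```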
This is an incredibly convoluted scenario for a use case with near-zero chance of a MITM attack. Security ops is cancer.
Please tell me this is satire.
What can I say, I am a pki nerd and I think the state of local networking is significantly harmed by consumer devices needing to speak http (due to modern browsers making it very difficult to use). This is less about increasing security and more about increasing usability without also destroying security by coaching people to bypass cert checks. And as home networks inevitably become more and more crowded with devices, I think it will be beneficial to be able to strongly identify those devices from the network side without resorting to keeping some kind of inventory database, which nobody is going to do.
It also helps that I know exactly how easy it is to build this type of infrastructure because I have built it professionally twice.
Why should your browser trust the router's self-signed certificate? After you verify that it is the correct cert you can configure Firefox or your OS to trust it.
Because local routers by definition control the (proposed?) .internal TLD, while nobody controls the .local mDNS/Zeroconf one, so the router or any local network device should arguably be trusted at the TLS level automatically.
Training users to click the scary “trust this self-signed certificate once/always” button won’t end well.
Honestly, I'd just like web browsers to not complain when you're connecting to an IP on the same subnet by entering https://10.0.0.1/ or similar.
Yes, it's possible that the system is compromised and it's redirecting all traffic to a local proxy and that it's also malicious.
It's still absurd to think that the web browser needs to make the user jump through the same hoops because of that exceptional case, while having the same user experience as if you just connected to https://bankofamerica.com/ and the TLS cert isn't trusted. The program should be smarter than that, even if it's a "local network only" mode.
Certificates protect against man in the middle attacks and those are a thing on local networks.
I wonder what this would look like: for things like routers, you could display a private root in something like a QR code in the documentation and then have some kind of protocol for only trusting that root when connecting to the router and have the router continuously rotate the keys it presents.
Yeah, what they'll do is put a QR code on the bottom, and it'll direct you to the app store where they want you to pay them $5 so they can permanently connect to your router and gather data from it. Oh, and they'll let you set up your WiFi password, I guess.
That's their "solution".
I wonder if a separate CA would be useful for non-public-internet TLS certificates. Imagine a certificate that won't expire for 25 years issued by it.
Such a certificate should not be trusted for domain verification purposes, even though it should match the domain. Instead it should be trusted for encryption / stream integrity purposes. It should be accepted on IPs outside of publicly routable space, like 192.168.0.0/16, or link-local IPv6 addresses. It should be possible to issue it for TLDs like .local. It should result in the usual invalid-certificate warning if served off a public internet address.
In other words, it should be handled a bit like a self-signed certificate, only without the hassle of adding your handcrafted CA to every browser / OS.
Of course it would only make sense if a major browser would trust this special CA in its browser by default. That is, Google is in a position to introduce it. I wonder if they may have any incentive though. (To say nothing of Apple.)
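A toy version of the "accepted only outside publicly routable space" rule above, using just the Python standard library; which ranges would actually qualify is an assumption here (this simply treats anything that isn't globally routable as local):

```python
import ipaddress

for addr in ["192.168.1.1", "10.0.0.1", "fe80::1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    # is_global is False for RFC 1918 ranges, link-local, loopback, etc.
    print(addr, "-> local-only cert acceptable?", not ip.is_global)
```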
But what would be the value of such a certificate over a self-signed one? For example, if the ACME Router Corp uses this special CA to issue a certificate for acmerouter.local and then preloads it on all of its routers, it will sooner or later be extracted by someone.
So in a way, a certificate the device generates and self-signs would actually be better, since at least the private key stays on the device and isn’t shared.
The value: you open such a URL with a bog-standard, just-installed browser, and the browser does not complain about the certificate being suspicious.
The private key of course stays within the device, or anywhere the certificate is generated. The idea is that the CA from which the certificate is derived is already trusted by the browser, in a special way.
Compromise one device, extract the private key, have a "trusted for a very long time" cert that identifies like devices of that type, sneak it into a target network for man in the middle shenanigans.
If someone does that you’ve already been pwned. In reality you limit the CA to be domain scoped. I don’t know why domain-scoped CAs aren’t a thing.
> If someone does that you’ve already been pwned
Then why use encryption at all when your threat model for encrypted communication can't handle a malicious actor on the network?
Because there are various things in HTML and JS that require https.
(Though getting the browser to just assume http to local domains is secure like it already does for http://localhost would solve that)
Yes
Old cruft dying there for decades
That's the reality and that's an issue unrelated to TLS
Running unmanaged compute at home (or elsewhere ..) is the issue here.
Frankly, unless 25- and 30-year-old systems are being continually updated to adhere to newer TLS standards, they are not getting many benefits from TLS.
Practically, the solution is virtual machines with the compatible software you'll need to manage those older devices 10 years in the future, or run a secure proxy for them.
Internet routers are definitely one of the worst offenders because originating a root of trust between disparate devices is actually a hard problem, especially over a public channel like wifi. Generally, I'd say the correct answer to this is that wifi router manufacturers need to maintain secure infrastructure for enrolling their devices. If manufacturers can't bother to maintain this kind of infrastructure then they almost certainly won't be providing security updates in firmware either, so they're a poor choice for an Internet router.
It is reasonable for the WebPKI of 2025 to assume that the Internet encompasses the entire scope of its problem.
If the web browsers would adopt DANE, we could bypass CAs and still have TLS.
A domain validated secure key exchange would indeed be a massive step up in security, compared to the mess that is the web PKI. But it wouldn't help with the issue at hand here: home router bootstrap. It's hard to give these devices a valid domain name out of the box. Most obvious ways have problems either with security or user friendliness.
Or, equivalently, it's being pushed because customers of "big players", of which there are a great many, are exposed to security risk by the status quo that the change mitigates.
It makes the system more reliable and more secure for everyone.
I think that's a big win.
The root reason is that revocation is broken, and we need to do better to get the security properties we demand of the Web PKI.
> It makes the system more reliable
It might in theory, but I suspect it's going to make things very, very unreliable for quite a while before it (hopefully) gets better. I think probably already a double-digit fraction of our infrastructure outages are due to expired certificates.
And because of that it may well tip a whole class of uses back to completely insecure connections because TLS is just "too hard". So I am not sure if it will achieve the "more secure" bit either.
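For what it's worth, the expired-cert class of outage is also the easiest to catch ahead of time; a minimal monitoring check might look like the sketch below (the host, port, and warning threshold are placeholders):

```python
import socket
import ssl
from datetime import datetime, timezone

host, port, warn_days = "example.com", 443, 14

ctx = ssl.create_default_context()
with socket.create_connection((host, port), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        not_after = tls.getpeercert()["notAfter"]

expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
days_left = (expires - datetime.now(timezone.utc)).days
print(f"{host}: expires {expires:%Y-%m-%d}, {days_left} days left")
if days_left < warn_days:
    print("WARNING: renew soon")
```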
It makes systems more reliable and secure for system runners who can leverage automation for whatever reason. For the same reason, it adds a lot of barriers for things like embedded devices, learners, etc. who might not be able to automate TLS certificate renewal.
Putting a manually generated cert on an embedded device is inherently insecure, unless you have complete physical control over the device.
And as mentioned in other comments, the revocation system doesn't really work, and reducing the validity time of certs reduces the risks there.
Unfortunately, there isn't really a good solution for many embedded and local-network cases. I think ideally there would be an easy way to add a CA that is trusted only for a specific domain or local IP address, so the device can generate its own certs from a local CA. And/or add trust for a self-signed cert with a longer lifetime.
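The "CA trusted only for a specific domain" part actually has an existing mechanism in X.509, name constraints, although how well clients enforce it varies. A sketch of minting such a constrained local root with Python's `cryptography` package (the `home.arpa` scope and the lifetime are arbitrary examples):

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

now = datetime.datetime.utcnow()
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Constrained Home CA")])

ca_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .add_extension(
        x509.NameConstraints(
            # "home.arpa" also matches any subdomain under it (RFC 5280 semantics).
            permitted_subtrees=[x509.DNSName("home.arpa")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
    .sign(key, hashes.SHA256())
)
print(ca_cert.extensions.get_extension_for_class(x509.NameConstraints).value)
```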
This is a bad definition of security, I think. But you could come up with variations here that would be good enough for most home network use cases. IMO, being able to control the certificate on the device is a crucial consumer right
Real question: What is the correct way to handle certs on embedded devices? I never thought about it before I read this comment.
There are many embedded devices for which TLS is simply not feasible. For remote sensing, when you are relying on battery power and need to maximise device battery life, then the power budget is critical. Telemetry is the biggest drain on the power budget, so anything that means spending more time with the RF system powered up should be avoided. TLS falls into this category.
Yes, but the question is about devices that can reasonably run TLS.
The answer is local acme with your router issuing certs for your ULA prefix or “home zone” domain.
> The answer is local acme with your router issuing certs for your ULA prefix or “home zone” domain.
That would be nice. But most people don't have a router running an ACME server.
It should become a thing
And rather than fix the issues with revocation, it's being shuffled off to the users.
Good example of enshittification
"Suffer" is a strong word for those of us who've been using things like Let's Encrypt for years now without issue.
Unless I misunderstood, GP mentions that the problem stems from WebPKI's central role in server identity management. Think of these cert lifetimes as being forcefully signed out after 47 days of being signed in.
> easier for a few big players in industry
Not necessarily. As OP mentions, more certs would mean bigger CT logs. More frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately, for you & me, as customers of cert authorities, 47 days is now the agreed cut-off (not 42).
I think a very short-lived cert (like 7 days) could be a problem for renewal errors/failures that don't self-correct but need manual intervention.
What will Let's Encrypt be like with 7-day certs? Will it renew them every day (6-day reaction time) or every 3 days (4-day reaction time)? Not every org is set up with 24/7 staffing; some people go on holidays, some public holidays extend to long weekends, etc. :) I would argue that it would be a good idea to give people a full week to react to renewal problems. That seems impossible for short-lived certs.
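A back-of-the-envelope version of that math, assuming the worst case where renewal starts failing right after a successful issuance:

```python
# Reaction window = cert lifetime minus the renewal interval: if a 7-day cert is
# renewed daily and the renewal at day 1 fails, roughly 6 days of validity remain.
lifetime_days = 7

for renew_every in (1, 2, 3):
    reaction_window = lifetime_days - renew_every
    print(f"renew every {renew_every}d -> ~{reaction_window}d to react before expiry")
```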
Like the Apple case: Apple already asks their developers to re-sign the app every 7 days. It should not be a problem.
That’s only a thing if you are not publishing on Apple Store, no?
Correct. Or if you’re not using an enterprise distribution cert.
Since you’ve thought about it a lot, in an ideal world, should CAs exist at all?
There's no such thing as an ideal world, just the one we have.
Let's Encrypt was founded with a goal of rapidly (within a few years) helping get the web to as close to 100% encrypted as we could. And we've succeeded.
I don't think we could have achieved that goal any way other than being a CA.
Sorry was not trying to be snarky, was interested in your answer as to what a better system would look like. The current one seems pretty broken but hard to fix.
In an ideal world where we rebuilt the whole stack from scratch, the DNS system would securely distribute key material alongside IP addresses and CAs wouldn't be needed. Most modern DNS alternatives (Handshake, Namecoin, etc) do exactly this, but it's very unlikely any of them will be usurping DNS anytime soon, and DNS's attempts to implement similar features have been thus far unsuccessful.
People who idealize this kind of solution should remember that by overloading core Internet infrastructure (which is what name resolution is) with a PKI, they're dooming any realistic mechanism that could revoke trust in the infrastructure operators. You can't "distrust" .com. But the browsers could distrust Verisign, because Verisign had competitors, and customers could switch transparently. Browser root programs also used this leverage to establish transparency logs (though: some hypothetical blockchain name thingy could give you that automatically, I guess; forget about it with the real DNS though).
.com can issue arbitrary certificates right now; they control what DNS info is given to the CAs. So I don't quite see the change, apart from people not talking about that threat vector at the moment.
Get one to issue a Google.com certificate and see what happens.
This is a bad faith argument. Whatever measures Google takes to prevent this (certificate logs and key pinning) could just as well be utilized if registrars delegated cryptographic trust as they delegate domains.
It is also true that these contemporary prevention methods only help the largest companies which can afford to do things like distributing key material with end user software. It does not help you and me (unless you have outsourced your security to Google already, in which case there is the obvious second hand benefit). Registrars could absolutely help a much wider use of these preventions.
There is no technical reason we don't have this, but this is one area where the interest of largest companies with huge influence over standards and security companies with important agencies as customers all align, so the status quo is very slow to change. If you squint you can see traces of this discussion all the way from IPng to TLS extensions, but right now there is no momentum for change.
It's easy to tell stories about shadowy corporate actors retarding security on the Internet, but the truth is just that a lot of the ideas people have about doing security at global Internet scale just don't pan out. You can look across this thread to see all the "common sense" stuff people think should replace the WebPKI, most of which we know won't work.
Unfortunately, when you're working at global scale, you generally need to be well-capitalized, so it's big companies that get all the experience with what does and doesn't work. And then it's opinionated message board nerds like us that provide the narratives.
This is a great point. For all of the "technically correct" arguments going on here, this one is the most practical counterpoint. Yes, in theory, Verisign (now Symantec) could issue some insane wildcard Google.com cert and send the public-private key pair to you personally. In practice, this would never happen, because it is a corporation with rules and security policies that forbid it.
Thinking deeper about it: Verisign (now Symantec) must have some insanely good security, because every black-hat nation-state actor would love to break into their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks against major email providers. (I'm pretty sure this already happened in the Netherlands.)
> every black hat nation state actor would love to break into on their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks
I might be misremembering but I thought one insight from the Snowden documents was that a certain three-letter agency had already accomplished that?
This was DigiNotar. The breach generated around 50 certificates, including certificates for Google, Microsoft, MI6, the CIA, TOR, Mossad, Skype, Twitter, Facebook, Thawte, VeriSign, and Comodo.
Here is a nice writeup for that breach: https://www.securityweek.com/hacker-had-total-control-over-d...
Edits: I believe this is what you were referring to. It was around 500, not 50. Dropped a 0.
I do remember that breach but that was before Snowden. I'm relatively sure Snowden published some document about the NSA trying to undermine CAs, too.
This isn't about the cert issuance servers, but DNS servers. If you compromise DNS then just about any CA in the world will happily issue you a cert for the compromised domain, and nobody would even be able to blame them for that because they'd just be following the DNS validation process prescribed in the BRs.
Verisign (Symantec) can't do anything, because the browsers distrusted them.
That only works for high-profile domains: a CA can just issue a cert, log it to CT, and if asked claim they got some DNS response from the authoritative server. Then it's a he-said-she-said problem.
Or is DNSSEC required for DV issuance? If it is, then we already rely on a trustworthy TLD.
I'm not saying there isn't some benefit in the implicit key mgmt oversight of CAs, but as an alternative to DV certs, just putting a pubkey in dnssec seems like a low effort win.
It's been a long time since I've done much of this though, so take my gut feeling with a grain of salt.
DNSSEC isn't required by anything, because almost nobody uses it.
Right. 'You can't "distrust" .com.' is probably not true in that situation. (If it were actually true, then "what happens" would be absolutely nothing.) I think you're undermining your own point.
Who cares? What does a certificate tell me other than that someone paid for a certificate?
And what do certificate buyers gain? The ability for their site to be revoked or expired and thus no longer work.
I’d like to be corrected.
Are you seriously not aware of Let's Encrypt? https://letsencrypt.org/
Nobody has really had to pay for certificates for quite a number of years.
What certificates get you, as both a website owner and user, is security against man-in-the-middle attacks, which would otherwise be quite trivial, and which would completely defeat the purpose of using encryption.
A certificate authority is an organisation that pays good money to make sure that their internet connection is not being subjected to MITMs. They put vastly more resources into that than you can.
A certificate is evidence that the server you're connected to has a secret that was also possessed by the server that the certificate authority connected to. This means that whether or not you're subject to MITMs, at least you don't seem to be getting MITMed right now.
The importance of certificates is quite clear if you were around on the web in the last days before universal HTTPS became a thing. You would connect to the internet, and you would somehow notice that the ISP you're connected to had modified the website you're accessing.
> pays good money to make sure that their internet connection is not being subjected to MITMs
Is that actually true? I mean, obviously CAs aren't validating DNS challenges over coffee shop Wi-Fi so it's probably less likely to be MITMd than your laptop, but I don't think the BRs require any special precautions to assure that the CA's ISP isn't being MITMd, do they?
No they should not
DANE is the way (https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...)
But no browser has support for it, so .. :/
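For the curious, a DANE TLSA record is just a hash of key material published in DNS. Here's a sketch that computes a "3 1 1" (DANE-EE, SPKI, SHA-256) record value from a local PEM copy of a cert using Python's `cryptography` package; the file path and domain are placeholders, and the zone would still need DNSSEC for the record to be worth anything:

```python
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Hash the SubjectPublicKeyInfo (selector 1), matching type 1 = SHA-256.
spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
digest = hashlib.sha256(spki).hexdigest()

# Usage 3 (DANE-EE): pin the end-entity key itself.
print(f"_443._tcp.example.com. IN TLSA 3 1 1 {digest}")
```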
Much as I like the idea of DANE, it solves nothing by itself: you need to sign the zone to protect it from tampering. Right now, the dominant way to do that is DNSSEC, though DNSCurve is a possible alternative, even if it doesn't solve the exact same problem. For DANE to be useful, you'd first need to get that set up on the domain in question, and the effort to get that working is far, far from trivial. Even then, the process is so error-prone and brittle that you can easily end up making a whole zone unusable.
Further, all you've done is replace one authority (the CA authority) with another one (the zone authority, and thus your domain registrar and the domain registry).
The zone authority already supersedes the CA authority in all ways.
When I manage a DNS zone, I'm free to generate all certificates I want
This is a great question. If we don't have CAs, how do we know if it's OK to trust a cert?
Are there any reasonable alternatives to CAs in a modern world? I have never heard any good proposals.
There are some alternatives.
Certificate pinning is probably the most widely known way to get a certificate out there without relying on live PKI. However, certificate pinning just shifts the burden of trust from runtime to install time, and puts an expiration date on every build of the program. It also doesn't work for any software that is meant to access more than a small handful of pre-determined sites.
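A bare-bones example of that kind of pin check, skipping WebPKI validation entirely and comparing the served certificate against a fingerprint fixed at "install time"; the host and fingerprint below are placeholders:

```python
import hashlib
import socket
import ssl

host, port = "router.local", 443
pinned_sha256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"  # placeholder pin

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False          # trust comes from the pin, not the WebPKI
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((host, port), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        der_cert = tls.getpeercert(binary_form=True)

seen = hashlib.sha256(der_cert).hexdigest()
print("pin matches" if seen == pinned_sha256 else f"PIN MISMATCH: {seen}")
```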
Web-of-trust is a theoretical possibility, and is used for PGP-signed e-mail, but it's also a total mess that doesn't scale. Heck, the best way to check the PGP keys for a lot of signed mail is to go to an HTTPS website and thus rely on the CAs.
DNSSEC could be the basis for a CA-free world, but it hasn't achieved wide use. Also, if used in this way, it would just shift the burden of trust from CAs to DNS operators, and I'm not sure people really like those much better.
In an ideal world we could just trust people not to be malicious, and there wouldn't be any need to encrypt traffic at all.
How relevant is that since we don't live in such a world? Unless you have a way to get to such a world, of course, but even then CAs would need to keep existing until you've managed to bring the ideal world about. It would be a mistake to abolish them first and only then start on idealizing the world.
CAs exist on the intersection of reality (far from ideal) and cryptography.
What alternatives come to mind when asking that question? Not being in the PKI world directly, web of trust is what comes to mind, but I'm curious what your question hints at.
I honestly don’t know enough about it to have an opinion. I have vague thoughts that DNS is the weak point for identity anyway, so can’t certs just live there instead? But I’m sure there are reasons (historical and practical).
4. Encrypted traffic hoarders would have to break more certs.
This one?
https://en.wikipedia.org/wiki/Utah_Data_Center
I love the push that LE puts on industry to get better.
I work in a very large organisation and I just don't see them being able to go to automated TLS certificates for their self-managed subdomains, inspection certificates, or anything else for that matter. It will be interesting to see how the short-lived certs are adopted in the future.
Could you explain why Let's Encrypt is dropping OCSP stapling support entirely, instead of restricting it to must-staple certificates only and letting those of us who want must-staple deal with the headaches? I believe that resolving the privacy concerns raised about OCSP did not require eliminating must-staple.
All of that in case the previous owner of the domain attempts a MITM attack against a client of the new owner, which is such a remote scenario. In fact, has it happened even once?
How viable are TLS attacks? Assuming a signed cert's private key is compromised, you still need network position or other tricks to get the traffic routed to you, no?
So for a bank, a private cert compromise is bad; for a regular low-traffic website, probably not so much?
Are you aware of a single real world not theoretical security breach caused by an unrevoked certificate that lived too long?
A real-world example of this would be Heartbleed, where users rotated without revoking their previously compromised certificates[1].
[1]: https://en.wikipedia.org/wiki/Heartbleed#Certificate_renewal...
Was a single certificate actually compromised and/or used maliciously? I am looking for an actual breach, not a theoretical scenario.
Based on that Wikipedia article, no. This is just more of the same friendless PKI geeks making the world unnecessarily more complicated. The only other people that benefit are the certificate management companies that sell more software to manage these insane changes.
Did you read it? There are multiple examples of claimed exploitation right below the section I linked.
Which bit talks about stealing a certificate/keys and MITMing traffic with the stolen keys, with real-world ramifications?
There are multiple examples of service compromise in the linked Wikipedia page.
Realistically, how often are domains traded and suddenly put to legitimate use (that isn't some domain parking scam) such that (1) and (2) are actual arguments? Lol
Domain trading (regardless if the previous use was legitimate or not) is only one example, not the sole driving argument for why the revocation system is in place or isn't perfectly handled.
> but customers aren't able to respond on an appropriate timeline
Sounds like your concept of the customer/provider relationship is inverted here.
No. The customer is violating their contract.
The whole "customer is king" doesn't apply to something as critical as PKI infrastructure, because it would compromise the safety of the entire internet. Any CA not properly applying the rules will be removed from the trust stores, so there can be no exceptions for companies who believe they are too important to adhere to the contract they signed.
How would a CA not being able to contact some tiny customer (surely the big ones all can and do respond in less than 90 days?) compromise the safety of the entire internet?
And if the safety of the entire internet is at risk, why is 47 days an acceptable duration for this extreme risk, but 90 days is not?
> surely the big ones all can and do respond in less than 90 days?
LOL. Old-fashioned enterprises are the worst at "oh, no, can't do that, need months of warning to change something!", while also handling critical data. A major event in the CA space last year was a health-care company getting a court order against a CA to not revoke a cert that, according to the rules for CAs, the CA had to revoke. (In the end they got a few days' extension, everyone grumbled, and the CA got told to please write their customer contracts more clearly, but the idea is out there, and nobody likes CAs doing things they are not supposed to, even if through external force.)
One way to nip that in the bud is making sure that even if you get your court order preventing the CA from doing the right thing, your certificate will expire soon anyway, so "we are too important to have working IT processes" doesn't work anymore.
Can you de-anonymize that event for me? Wasn't able to find it given the lack of unique keywords to search for.
sure!
News report: https://www.heise.de/en/news/DigiCert-Customer-seeks-to-exch...
nitty-gritty bugzilla: https://bugzilla.mozilla.org/show_bug.cgi?id=1910322#c8
some follow-on drama: https://news.ycombinator.com/item?id=43167087
I have a feeling it'll eventually get down even lower. In 2010 you could pretty easily get a cert for 10 years. Then 5 years. Then 3 years. Then 2 years. Then 1 year. Then 3 months. Now less than 2 months.