I run my blog over unencrypted HTTP/1.1 just to make the point that we do not have to depend on third parties to publish content online.

And I noticed that WhatsApp is even worse than Chrome: it opens HTTPS even if I share HTTP links.

I applaud you for your principled stance. And of course this change in Chrome changes nothing wrt your ability to continue doing this.

Equally, your preference for HTTP should not stand in the way of a more secure default for the average person.

Honestly I'd prefer that my mom didn't browse any HTTP sites; it's just safer that way. But that doesn't detract from your ability to serve unencrypted pages, which can easily be intercepted or modified by an ISP (or worse).

Depending on one less third party, maybe, but you still depend on the DNS root servers, your ISP / hosting, your domain registry, etc.

Third-party root servers are generally used for looking up TLD nameservers, not for looking up domain names registered to individuals publishing personal blogs^1

Fortunately, one can publish on the www without using ICANN DNS

For example http://199.233.217.201 or https://199.233.217.201

1. I have run my own root server for over 15 years

An individual cannot even mention choosing to publish a personal blog over HTTP without being subjected to a kneejerk barrage of inane blather. This is truly a sad state of affairs

I'm experimenting with non-TLS, per-packet encryption, with a mechanism for built-in virtual hosting (no SNI) and collision-proof "domain names" on the home network, as a reminder that TLS is not the only way to do HTTPS

It's true we depend on ISPs for internet service but that's not a reason to let an unlimited number of _additional_ third parties intermediate and surveil everything we do over the internet

> inane blather

And this is why it's a good thing that every major browser will make it more and more painful, precisely so that instead of arguments about it, we'll just have people deciding whether they want their sites accessible by others or not.

Unencrypted protocols are being successfully deprecated.

You have some weird definition of "root".

See https://en.wikipedia.org/wiki/Alternative_DNS_root: you could run your own root server (and people have, and do).

[deleted]

Definition of "root server"

Authoritative DNS nameserver that serves root.zone, e.g., the one provided by ICANN, or maybe a customised one

In my own case it is served only to me, on the local network

Many years ago, one of the former ICANN board members mentioned on his personal blog running his own root
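
For the curious, a resolver can serve itself the root zone with just a few lines. A sketch using unbound's auth-zone feature, in the spirit of RFC 8806's "local root" approach; the transfer source shown is one of the public ones that RFC lists, and is a placeholder here:

    # unbound.conf sketch: keep a local copy of root.zone (RFC 8806 "local root")
    auth-zone:
        name: "."
        primary: 192.0.32.132        # lax.xfr.dns.icann.org (placeholder; see RFC 8806)
        fallback-enabled: yes        # fall back to the real roots if the copy is stale
        for-downstream: no           # don't serve "." to downstream clients directly
        for-upstream: yes            # answer the resolver's own root queries locally
        zonefile: "root.zone"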

[deleted]
[deleted]

People using the web can choose what software to use. This includes both client software and server software. Arguably the latter ultimately determines whether HTTP is still available on the internet, regardless of whether it is used by any particular client software, e.g., a popular browser

One advertising company through its popular "free browser", a Trojan Horse to collect data for its own purposes, may attempt to "deprecate" an internet protocol by using its influence

But at least in theory such advertising companies are not in charge of such protocols, and whether the public, including people who write server software or client software, can use them or not

Let's Encrypt pushes me to run its self-updating certbot on my personal server, which is a big no-go.

I know about acme.sh, but still...

They're focused on the thing that'll get the most people up and running for the least extra work from them. When you say "push", do you just mean that's the default, or are they trying to get you not to use another ACME client, like acme.sh, or one built in to servers you run anyway, or indeed one you rolled yourself?

Like, the default for cars almost everywhere is you buy one made by some car manufacturer like Ford or Toyota or somebody, but usually making your own car is legal, it's just annoyingly difficult and so you don't do that.

As a car mechanic, you could at least tune... until these days, when you can realistically tune only 10-15 year old models, because newer ones are just locked-down computers on wheels.

>usually making your own car is legal

It may be legal but good luck ever getting registration for it.

It's actually not that bad in most states, some even have exceptions to emissions requirements for certain classes of self-built cars.

Now, getting required insurance coverage, that can be a different story. But even there, many states allow you to post a bond in lieu of an insurance policy meeting state minimums.

Usually making one car, or millions of cars, is doable.

It’s trying to make and sell three or four that is nearly impossible.

I counted by hand, so it might be wrong, but they appear to list and link to 86 different ACME client implementations across more than a dozen languages: https://letsencrypt.org/docs/client-options/

I've used their stuff since it came out and never used certbot, FWIW. If I were to set something up today, I'd probably use https://github.com/dehydrated-io/dehydrated.

Plus, it's one of the easier protocols to implement. I implemented it myself, and it didn't take long.

So you're absolutely not dependent on the client software, or indeed anyone else's client software.
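
For a sense of scale, here's a minimal Python sketch of the opening, unauthenticated steps of the flow (RFC 8555) against Let's Encrypt's staging endpoint; everything after this needs JWS requests signed with an account key, so those steps are only outlined in comments:

    # Sketch: first steps of ACME (RFC 8555), standard library only
    import json
    import urllib.request

    DIRECTORY_URL = "https://acme-staging-v02.api.letsencrypt.org/directory"

    with urllib.request.urlopen(DIRECTORY_URL) as resp:
        directory = json.load(resp)

    # The directory maps operation names to endpoint URLs
    print(directory["newAccount"], directory["newOrder"])

    # Every signed request carries a fresh anti-replay nonce
    req = urllib.request.Request(directory["newNonce"], method="HEAD")
    nonce = urllib.request.urlopen(req).headers["Replay-Nonce"]
    print("nonce:", nonce)

    # The rest of the flow, all JWS-signed POSTs with the account key:
    #   1. newAccount -> register the account key
    #   2. newOrder   -> returns authorization URLs for each domain
    #   3. answer an http-01 or dns-01 challenge per authorization
    #   4. finalize the order with a CSR, then download the certificate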

There is a plethora of other clients besides certbot or acme.sh.

Let's Encrypt does not write or maintain certbot

ISRG (Let's Encrypt's parent entity) wrote Certbot, initially under the name "letsencrypt" but it was quickly renamed to be less confusing, and re-homed to the EFF rather than ISRG itself.

So, what you've said is true today, but historically Certbot's origin is tied to Let's Encrypt, which makes sense because initially ACME wasn't a standard protocol; it was designed to become one, but it was still under development and the only practical server implementations were both developed by ISRG / Let's Encrypt. RFC 8555 took years.

Yes, it started that way, but complaining about the current auto-update behavior of the software (not the ACME protocol) is completely unrelated to Let's Encrypt; it is instead an arbitrary design decision by someone at the EFF.

As far as I remember, from the beginning the certbot / Let's Encrypt client was a piece of crap, especially regarding the autodownload and autoupdate (a.k.a. autobreak) behavior.

And, by contrast, I couldn't praise acme.sh enough: it's simple, dependency-free, and reliable!

Honestly I abandoned acme.sh, as it was largely not simple (it's a giant ball of shell) and that has led to it not being reliable (e.g. silently changing the way you configure acme-dns instance URLs, the vendor building their compatibility around an RCE, etc.)

Host an onion website at home using solar energy, and the only third party your website will depend on is your internet provider :)

Onion websites also don't need TLS (they have their own built-in encryption) so that solves the previous commenter's complaint too. Add in decentralized mesh networking and it might actually be possible to eliminate the dependency on an ISP too.

> they have their own built-in encryption

What does this mean? Is that encryption not reliant on any third parties, or is it just relying on different third parties?

The onion URL is itself a public key - https://protonmailrmez3lotccipshtkleegetolb73fuirgj7r4o4vfu7... for example.

Proton Mail burned CPU time until they found a public key that started the way they wanted it to.

So that is the public key for an HTTPS equivalent, as part of the Tor protocol.

You can ALSO get an HTTPS certificate for an onion URL; a few providers offer it. But it’s not necessary for security - it does provide some additional verification (perhaps).
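
A sketch of how that format works (per Tor's rend-spec-v3), assuming Python's 'cryptography' library; vanity addresses like Proton's come from regenerating the keypair until the base32 prefix matches:

    # A v3 .onion address is derived from an ed25519 public key:
    #   address  = base32(pubkey || checksum || version) + ".onion"
    #   checksum = sha3_256(".onion checksum" || pubkey || version)[:2]
    import base64
    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    pubkey = Ed25519PrivateKey.generate().public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )  # 32 raw bytes

    version = b"\x03"
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    address = base64.b32encode(pubkey + checksum + version).decode().lower()
    print(address + ".onion")  # 56 base32 characters: the key itself is the name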

If everyone who wants a human-readable domain did this, it would be environmentally irresponsible. And 'typo' domains would be trivial: protonmailrmez31otcciphtkl vs. protonmailrmez3lotcciphtkl.

It's a shame these didn't put in a better built-in human-readable URL system. Maybe a free-form text field 15-20 characters long, appended to the public key and somehow made part of that key. Maybe the key contains a checksum of those letters to verify the text field. So something like protonmail.rmez3lotcciphtkl+checksum.

But this being said, I think the sort of independent, 'not needing third parties' ethic just isn't realistic. It's the libertarian housecat meme writ large. Once you're communicating with others and being part of a shared communal system, you lose that independence. Keeping a personal diary is independent. Anything past that is naturally communal and will involve some level of sharing, cooperation, and dependency on others.

I think this sort of anti-communal attitude is rooted in a lot of regressive stuff and the myths of the 'man is an island' and 'great man' nonsense. It then leads to weird stuff like bizarre domain names and services no one likes to use. Outside of very limited use cases, Tor just can't compete.

>If everyone who wants a human readable domain did this, it would environmentally irresponsible

Could we finally stop acting like we know how other people's energy is being produced?

And an army of volunteers and feds to run relays

What about the Tor directory authorities?

There is no magic do it all yourself. Communicating with people implies dependence.

I gave up trying to build a solar panel.

What about all the third parties running relays and exit nodes?

If you think about it the spirit of the internet is based on collaboration with other parties. If you want no third parties, there's always file: and localhost.

CAs are uniquely assertive about their right to cut off your access.

My hosting provider may accidentally fuck up, but they'll apologise and fix it.

My CA fucks up, they e-mail me at 7pm telling me I've got to fix their fuck-up for them by jumping through a bunch of hoops they have erected, and they'll only give me 16 hours to do it.

Of course, you might argue my hosting provider has a much higher chance of fucking up....

So what does "CA fixes the problem" look like in your head? Because they'll give you a new certificate right away. You have to install it, but you can automate that, and it's hard to imagine any way they could help that would be better than automation. What else do you want them to do? Asking them to not revoke incorrect or compromised certificates isn't good for maintaining security.

Imagine if, hypothetically speaking, the CA had given you a certificate based on a DNS-01 challenge, but when generating and validating the challenge record they'd forgotten to prefix it with an underscore (see the record sketch at the end of this comment). That could have led to a certificate being issued to the wrong person if your website was a service like dyndns that lets users create custom subdomains.

Except (a) your website doesn't let users create custom subdomains; (b) as the certificate is now in use, you the certificate holder have demonstrated control over the web server as surely as a HTTP-01 challenge would; (c) you have accounts and contracts and payment information all confirming you are who you say you are; and (d) there is no suggestion whatsoever that the certificate was issued to the wrong person.

And you could have gotten a certificate for free from Let's Encrypt, if you had automatic certificate rotation in place - you paid $500 for a 12-month certificate because you don't.

An organisation with common sense policies might not need to revoke such a certificate at all, let alone revoke it with only hours of notice.
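
For reference, a correct DNS-01 validation record lives at an underscore-prefixed owner name (RFC 8555, section 8.4), which a dyndns-style service would never hand out to a user; a sketch with placeholder values:

    ; correct: the underscore-prefixed name is reserved for validation
    _acme-challenge.dyn.example.com.  300  IN  TXT  "<digest - placeholder>"
    ; without the underscore, a user-creatable subdomain could answer instead
    acme-challenge.dyn.example.com.   300  IN  TXT  "<attacker-controlled>"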

You didn't answer my question. What would the CA fixing it look like? Your hosting example had the company fix problems, not ignore them.

And have you seen how many actual security problems CAs have refused to revoke in the last few years? Holding them to their agreements is important, even if a specific mistake isn't a security problem [for specific clients]. Letting them haggle over the security impact of every mistake is much more hassle than it's worth.

> if you had automatic certificate rotation in place - you paid $500 for a 12-month certificate because you don't

Then in this hypothetical I made a mistake and I should fix it for next time.

And I should be pretty mad at my CA for giving me an invalid certificate. Was there an SLA?

CAs have to follow the baseline rules set by Google and Mozilla regarding incident response timelines. If they gave you more time, the browsers would drop them as a supported CA.

The CAs have to follow the baseline rules set by the CA/Browser Forum which CAs are voting members of.

Mark my words, some day soon an enterprising politician will notice the CA system can be drawn into trade sanctions against the enemy of the day....

The BRs already have a deliberate carve out where a CA can notify that their government requires them to break the rules and how they'll do that, and then the browsers, on behalf of relying parties can take whatever action they deem appropriate.

If you're required to (or choose to) not tell us about it, then thanks to active monitoring we'll likely notice, and your CA will likely be distrusted for not telling us. Having an official disclosure mechanism makes that easier - the same way there's a way to officially notify the US that you're a spy, so when you don't (because, duh, you're a spy) you're screwed because you didn't follow the rules.

The tech centralization under the US government does mean there's a vulnerability on the browser side, but I wouldn't speculate about how long that would last if there's a big problem.

Doesn't that mean that technically, any node in the network between you and your reader can mutate the contents of the blog in-transit without anyone being the wiser (up to and including arbitrary JavaScript inline injection)?

Probably a low-threat security risk for a blog.
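
To make the risk concrete: any on-path box can rewrite a plaintext response. A minimal sketch, assuming mitmproxy as the on-path node (the script URL is a placeholder):

    # mitmproxy addon: rewrite any plaintext-HTTP HTML response in transit.
    # Run on an on-path node with: mitmdump -s inject.py --mode transparent
    from mitmproxy import http

    def response(flow: http.HTTPFlow) -> None:
        # Only plaintext HTTP can be rewritten like this; with TLS, an
        # on-path node can't alter the body without breaking the handshake.
        if flow.request.scheme == "http" and \
                "text/html" in flow.response.headers.get("content-type", ""):
            flow.response.text = flow.response.text.replace(
                "</body>",
                '<script src="http://injected.example/ads.js"></script></body>',
            )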

Yes, hotels were injecting ads on their free WiFi - https://news.ycombinator.com/item?id=3804608

ISPs have been known to do the same thing.

Devil's advocate, but maybe ISPs should all inject ads to make a point. They make money, and anyone using HTTP gets taught a free lesson on what MITM means

Before turning on the dude who strives to keep the internet free, fix your corporate laptop that does MITM even for HTTPS connections.

"Free" for fraudsters to get their pickings, maybe.

I own a personal laptop?

I'd be happy if EU outlawed this instead of outlawing encryption.

But indeed, the ability to publish on my own outweighs the risk of someone modifying my content.

Most of us here read their news from work laptops, where the employer and their MiTM supplier are a much bigger threat even for HTTPS websites.

This puts a question into my brain which I have never thought to pursue: could you offer a self-signed cert that the user has to install for HTTPS?

Their client will complain loudly until and unless they install it, but then for those who care you could offer the best of both worlds.

Almost certainly more trouble than it's worth. G'ah, and me without any free time to pursue a weekend hobby project!
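
The minting side, at least, is easy; a sketch with openssl (domain and lifetime are placeholders, needs OpenSSL 1.1.1+ for -addext), after which visitors would have to import the cert into their trust store by hand:

    # generate a self-signed certificate and key, valid for one year
    openssl req -x509 -newkey rsa:2048 -nodes \
        -keyout key.pem -out cert.pem -days 365 \
        -subj "/CN=blog.example.com" \
        -addext "subjectAltName=DNS:blog.example.com"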

> for those who care you could offer the best of both worlds.

You're not really offering that, because the first connection could've been intercepted.

Too true. The old model is that you have to sneaker-net that first step. To get someone's public key, you'd literally meet them in person and they'd hand you a copy. We don't do that anymore.

I can imagine alternate approaches (service that stores personal keys on an HTTPS server signed via a public cert, keys in peer-to-peer filesharing with the checksum provided side-channel), but that gets increasingly more elaborate for diminishing return.

For a blog, I think the bigger risk is pervasive surveillance - the government reads all the connections and puts you on a list if the thing you are reading has the wrong keyword in it.

Do you depend on a DNS root server to map your website name to your IP address? That's a third party.

There are ways to remove that dependency, but it's going to involve a decentralized DNS replacement like Namecoin or Handshake, many of which include their own built-in alternatives to the CA system too so if "no third parties" is something you truly care about you can probably kill two birds with one stone here.

Registrar is the big one, if yours decides to do a Google and randomly ban you and automatically decline your appeal with AI, you're stuffed.

This is generally my biggest concern. Not that I'm doing anything shady, but I've wanted to set up a potentially politically charged site in the past.

why not make the point with at least a self-signed cert?

99% of visitors wouldn't get the intended point - they'd think he's pro-cert, but forgot to renew it or something.

Honestly I don't even think technical users would get the 'point' most of the time.

Whenever I visit an HTTP-only site, I assume the administrator is either old and does not understand how to set up SSL, or it's an unmaintained/forgotten web server that hasn't been touched in about a decade.

> When ... I assume the ...

If it's (1) obviously recent content*, and (2) something that needs little security - a city council member's blog, or recipes - then how much do you care that it's HTTP-only?

*Or just date-insensitive

That's precisely the point of HTTPS, your harmless recipe site can start spreading malware without your knowledge if you make it HTTP, as the content can be changed by anyone it passes through.

> your harmless recipe site can ...

As can every recipe site with httpS - via a vulnerable WordPress plugin, or a too-easy admin password, or malvertising, or a zillion other things.

But conveniently, "all sites gotta be httpS" puts the biggest part of the blame/load on the littlest little guys - who want to make and post good, unmonetized content. But don't have an IT skill set, nor want to deal with yet more admin overhead & costs.

It really doesn't matter if a personal blog decides to serve only http as a niche protest. But you really don't want to go back to the times when most sites were http; we had:

- Massive government spying programs, people forget that Chat Control used to be the standard, everything you ever browsed, posted or said online could be monitored

- Tracking that you could not disable, where your ISP would work with publishers appending http headers to every request that uniquely identified you.

- Not only little guys, as you say, were using http, it was government sites, news sites, a huge part of the internet was unencrypted and vulnerable to mitm. As you say, yes, it's not the only attack vector but it was one of the easiest to exploit, where any random wifi access point you're connected to could steal your credentials.

> But conveniently, "all sites gotta be httpS" puts the biggest part of the blame/load on the littlest little guys - who want to make and post good, unmonetized content. But don't have an IT skill set, nor want to deal with yet more admin overhead & costs.

Sure, but if you don't have the skills to self-host you are using an online service, and ~100% of them will do HTTPS for you.

If you are self hosting, HTTPS can take as little as zero configuration - I use Caddy and it does it for me.
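
For example, a complete Caddyfile for a static site can be as small as this (domain and path are placeholders); Caddy obtains and renews the certificate by itself:

    blog.example.com {
        root * /var/www/blog
        file_server
    }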

Yea, but then every time I open your blog, all the middle-men receive some metadata about my device/browser/language etc, that helps to fingerprint my activity online.

There are dozens of us, I guess, that care about this kind of thing. I have never really understood the obsession with HTTPS for static content that I don't care if anyone sees me reading, like a blog post. HTTPS should be for things that matter; everything else can, and I think should, use HTTP when encryption is not necessary.

Depending on yet another third party to provide what is IMHO a luxury should not be required, and I have been continually confused as to why it is being forced down everyone's throat.

It’s static while you control it. Soon as I MIIT your content it will look to your users like you updated your site with a crypto miner and a credit card form. You can publish your site with a self-signed key if you’d like and only depend on your ISP/web host provider, DNS provider, domain registrar, and the makers of your host OS and web server and a few dozen other things.

> MIIT

Man in in the?

Man In Icy Tundra

Typos happen :)

> HTTPS should be for things that matter

If that were the universal state, then it would be easy to tell when someone was visiting a site that mattered, and you could probably infer a lot about it by looking at the cleartext of the non-HTTPS side they were viewing right before they went to it.

You can already see what site someone visits with HTTPS. It's in the Client Hello, and is important for things like L4 load balancing (e.g. HAProxy can look at the host to choose what backend to forward the TCP packets for that connection to without terminating TLS). It's also important for network operators (e.g. you at home) to be able to filter unwanted traffic (e.g. Google's).
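
For instance, a sketch of SNI-based TCP routing in HAProxy, without terminating TLS (names and addresses are placeholders):

    frontend tls_in
        mode tcp
        bind :443
        # wait for the ClientHello so req.ssl_sni is populated
        tcp-request inspect-delay 5s
        tcp-request content accept if { req.ssl_hello_type 1 }
        use_backend blog if { req.ssl_sni -i blog.example.com }
        default_backend fallback

    backend blog
        mode tcp
        server s1 10.0.0.10:443

    backend fallback
        mode tcp
        server s2 10.0.0.11:443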

I don't think seeing the site is too important, although there are TLS extensions to encrypt that, too[0]. In practice, a huge chunk of sites still have unique IPs, so seeing that someone is connecting to 1.2.3.4 gives you a pretty good idea of what site they're visiting. That's even easier if they're using plaintext DNS (i.e. instead of DoH) so that you can correlate "dig -t a example.com => 1.2.3.4" followed immediately by a TCP connection to 1.2.3.4. CDNs like Cloudflare can mitigate that for a lot of sites, but not millions of others.

However, the page you're fetching from that domain is encrypted, and that's vastly more sensitive. It's no big deal to visit somemedicinewebsite.com in a theocratic region like Iran or Texas. It may be a very big deal to be caught visiting somemedicinewebsite.com/effective-abortion-meds/buy. TLS blocks that bit of information. Today, it still exposes that you're looking at plannedparenthood.com, until if/when TLS_ECH catches on and becomes pervasive. That's a bummer. But you still have plausible deniability to say "I was just looking at it so I could see how evil it was", rather than having to explain why you were checking out "/schedule-an-appointment".

[0] https://developers.cloudflare.com/ssl/edge-certificates/ech/

The CIA’s website was a very early adopter of HTTPS across the board, for this very reason.

Most of the site hosted general information about the agency and its functions, but they also had a section where you could provide information.

Great point, and an excellent illustration. If it was trivial for an adversary to see that some people were visiting http://cia.gov/visitor-center-and-gift-shop-hours, but others were visiting https://cia.gov/[we-can't-see-this-part], they'd know exactly who to concentrate their rubber hose cryptography efforts on.

> But you still have plausible deniability to say "I was just looking at it so I could see how evil it was", rather than having to explain why you were checking out "/schedule-an-appointment".

TLS traffic analysis can still reveal which pages you accessed with some degree of confidence, based on packet sizes, timings, external resources that differ between pages (e.g. images)

https://blog.cloudflare.com/announcing-encrypted-client-hell...

Yes that's why I listed a couple reasons why adopting ECH everywhere is not straightforwardly all good. The network operator one in particular is I think quite important. It happens that the same company with the largest pushes for "privacy" (Google) has also been constantly making it more difficult to make traffic transparent to the actual device owner (e.g. making it so you can't just drop a CA onto your phone and have all apps trust it). Things like DoH, ECH, and ubiquitous TLS (with half the web making an opaque connection to the same Cloudflare IPs) then become weaponized against device owners.

AFAIK it's still not that widely adopted or can be easily blocked/disabled on a network though.

That sounds like an Android issue, not a TLS issue. If I need to break TLS I can add my own CA. Not having TLS is not the solution. Google will find other ways to take control from you.

There are good arguments for it, but it's also not a coincidence that they happen to align with Google's business objectives. For example, it's hard to issue a TLS cert without notifying Google of it.

I don't get your logic/reasoning here... could you explain?

There are public logs of every TLS cert issued by the major providers. This benefits Google.

Kinda like how Wikipedia benefits Google. Or public roads benefit Uber. Or clean water benefits restaurants

Certificate transparency logs are public. How does this benefit Google?

Google also knows about every domain name that gets renewed or registered... How does knowing a website has tls help in any meaningful way that would detract from society as a whole?

The certificate transparency log lets everyone know which domains are active as their certificates get renewed (likely more often than the domain itself is), and also which subdomains are active, if those are not secured using a wildcard certificate.

Not just Google: AI bots could use the information to look for juicy new data to scrape and ingest.

Probably not a significant thing, the information can be derived in other ways too if someone wants to track these things, but it is a thing.
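
Anyone can try this against their own domain; a sketch using the crt.sh aggregator's JSON endpoint (the domain is a placeholder):

    # list names from certificates logged for a domain and its subdomains
    curl -s 'https://crt.sh/?q=%25.example.com&output=json' \
        | jq -r '.[].name_value' | sort -u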

This doesn't feel like much of an argument in favor of not using https though.

Not at all IMO, unless you are really paranoid about Google & friends. I was just saying that what was being questioned does (or could) benefit them a tiny bit.

Just because you don't care doesn't mean nobody cares. I don't want anyone snooping on what I browse regardless of how "safe" someone thinks it is.

My navigation habits are boring but they are mine, not anyone else's to see.

A server has no way to know whether the user cares or not, so they are not in a position to choose the user's privacy preferences.

Also: a page might be fully static, but I wouldn't want $GOVERNMENT or $ISP or $UNIVERSITY_IT_DEPARTMENT to inject propaganda, censor... Just because it's safe for you doesn't mean it's safe for everyone.

And so we got The Usual Conversation:

"I want my communications to be as secure as practical."

"Ah, but they're not totally secure! Which means they're totally insecure! Which means you might as well write your bank statements on postcards and mail them to the town gossip!"

It amazes me how anti-HTTPS some people can be.

So... do you refuse to use the laptop supplied by your employer?

It does MITM between you and the HTTPS websites you browse.

It doesn't MITM anything. Do you see that as normal? Because I don't. We're adults here and I'm a tech guy; there's zero reason to control anything on my laptop.

In fact it's just a regular laptop that I fully control and installed from scratch, straight out of Apple's store. As all my company laptops have been.

And if it was company policy I would refuse indeed. I would probably not work there in the first place, huge red flag. If I really had to work there for very pressing reasons I would do zero personal browsing (which I don't do anyways).

Not even when I was an intern at random corpo my laptop was MITMed.

My work laptop has a CA from the organization installed, and all HTTP(S) traffic is passed through a proxy server which filters all traffic and signs certificates for all domains with its CA. It's relatively common for larger organizations. I've seen this in govt and banking.

To provide a European/Dutch perspective: I’m pretty sure that as a small employer myself, I am very much disallowed from using those mechanisms to actually inspect what employees are doing. Automated threat/virus scanning may be a legal gray zone, but monitoring-by-default is very much illegal, and there have been plenty of court cases about this. It is treated similarly to logging and reading all email, Slack messages, constantly screenrecording, or putting security cameras aimed at employees all day long. There may be exceptions for if specific fraud or abuse is suspected, but burden of proof is on the employer and just monitoring everyone is not justifiable even when working with sensitive data or goods.

So to echo a sister comment: while sadly it is common in some jurisdictions, it is definitely not normal.

I'm literally working for a local govt agency (via a contracting company). I'm not sure that anything is being actively monitored so much as it's blocking a number of sites (anything AI), upload sites, etc., as well as blocking POST actions to non-whitelisted sites/apps.

I've also seen similar configurations in Banking environments having done work for three major banking establishments over the years. The exception was when I was on a platform security team that managed access controls. Similarly at a couple of large airlines.

I know it's common, but I don't think it's "normal" even if it has been "normalized". I wouldn't subject myself to that. If my employer doesn't trust me to act like an adult I don't think it's the place for me.

I could maybe understand it for non-tech people (virus scanning yadda yadda) but for a tech person it's a nuisance at best.

So you want to double the infrastructure surface to provide a different route for access for developers which may be a tiny portion of users in an organization? That's privilege right there.

Edit: I'm not saying I like it this way... but that's what you get when working in a small org in a larger org in a govt office. When I worked in a security team for a bank, we actually were on a separate domain and network. I generally prefer to work untrusted, externally and rely on another team for production deployment workflows, data, etc.

Indeed I'm quite privileged.

I'm lucky to be a dev both by trade and passion. I like my job, it's cozy, and we're still scarce enough that my employer and I are in a business relationship as equals: I'm just a business selling my services to another business under common terms (which in my case include trusting each other).

Using an employer-issued computer for anything but work for that employer is foolish, for a multitude of legal and other reasons. Privacy is just one of them.

> So... do you refuse to use the laptop supplied by your employer?

For things other than work for my employer? Yes.

And work stuff doesn't touch my personal equipment, with the exception that I can connect to the company VPN from my personal laptop to remote to a work machine if I need to do DayJob work remote in an emergency when I don't have the company laptop with me.

> It does MITM between you and the HTTPS websites you browse.

My employer doesn't. Many don't.

Of course many do, but that is them controlling what happens on their equipment and they are usually up front about it. This is quite different to an ISP, shady WiFi operator, or other adversarial network node, inspecting and perhaps modifying what I look at behind my back.

This is still not that common but I used to work on a commercial web proxy that did exactly this. The only way it works is if the company pushes out a new root certificate via group policy (or something similar) so that the proxy can re-encrypt the data. Users can tell that this is being done by examining the certificate.

But this is mostly a waste of time; these days companies just install agents on each laptop to monitor activity. If you do not own the machine/network you are using, then don't visit sites that you don't want them to see.

The issue is that static-content-only sites do not exist - unless browsers change their stance by disabling long-relied-upon features like JavaScript and embedded frames for content served over plain HTTP.

They've taken that strategy with newer enhancements (for instance, you can't use passkeys over non-secured channels), but the bar for widespread breakage of existing deployments is pretty high - even if changes like this make it harder to navigate to those existing deployments.

> The issue is that static content only sites do not exist

You’re exaggerating a bit. I have a static website that hasn’t changed in over 15 years. Okay, not completely static, as one page has a (static) HTML form that creates some file templates as a utility, but everything is working like it did in 2010. Except that I added TLS support at some point so that people don’t get scary warnings.

You just clearly don’t understand it is important that no one injects anything into your code while I am browsing it.

With HTTP it is trivial.

So you say you don’t care if my ISP injects whole bunch of ads and I don’t even see your content but only the ads and I blame you for duping me into watching them.

Nowadays VPN providers are popular. What if someone buys VPN service from one of the shitty ones and gets treated like I wrote above, and it is the reputation of your blog that is devastated?

My ISP does not and if yours does, vote with your money or lobby your government to make this illegal.

And while at it, lobby to make corporate MiTM tools illegal as well.

Because if you are bothered about my little blog, you should be bothered that your employer can inspect all your HTTPS traffic.

Or you could do a much simpler thing and support HTTPS and not expect users to change ISPs (which is not always possible, e.g. in rural areas) or change laws (which is even less realistic) to browse your (or any other) blog. Injecting ads has nothing to do with corporate MITM, it's unquestionably bad, but unrelated here.

More to the point: serving your blog with HTTPS via Let's Encrypt does not in any way forbid you from also serving it with HTTP without "depending on third parties to publish content online". It would take away from the drama of the statement though, I suppose.
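
A sketch of that in nginx (domain, paths, and cert locations are placeholders); the same content is served both ways:

    server {
        listen 80;
        server_name blog.example.com;
        root /var/www/blog;
    }

    server {
        listen 443 ssl;
        server_name blog.example.com;
        root /var/www/blog;
        ssl_certificate     /etc/letsencrypt/live/blog.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;
    }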

To add to that, rogue ISP employees don't care if it is illegal.

It's not just your ISP, it's anyone on the entire network path, and on most networks with average security that includes any device on your local network.

>There are dozens of us I guess...

Shine on you crazy diamond, and all that, but...

> I have been continually confused as to why it is being forced down everyone's throat.

Have you never sat on public wifi and tried to open an http site? These days it is highly likely to be MITM'd by the wifi provider to inject ads (or worse). Even residential ISPs that one pays for cannot be trusted not to inject content, if given the opportunity, because they noticed that they are monopolies and most users cannot do anything about it.

You don't get to choose the threat model of those who visit your site.

Have you ever opened your work laptop? It is likely MITM'd so that your employer can see everything you read and post on the internet and HTTPS won't help you.

So? I own more devices than a work laptop. I would like to have privacy and security on those.

> Have you never sat on public wifi and tried to open an http site? These days it is highly likely to be MITM'd by the wifi provider to inject ads (or worse).

I honestly don't remember a single case where that happened to me. Internet user since 1997.

Which blog post? If it's anything remotely political or controversial, people have disappeared for that. You can always spot someone on HN who has never stepped outside their cushy life in a liberal democracy. The difference in mentality — between how "you" and "we" see the world — is crazy.

Agreed. I think that the push to make everything HTTPS is completely unnecessary, and in fact counterproductive to security. By throwing scary warnings in front of users when there is no actual security threat, we teach users that the scary warnings don't matter and they just should click past them. Warning when a site doesn't use TLS is a clear cut case of crying wolf.

> Warning when a site doesn't use TLS is a clear cut case of crying wolf.

No, it's a warning sign that you may be an active victim of an HTTPS downgrade attack where an attacker is blocking HTTPS communication and presenting you with an HTTP version of the website that you intended to visit, capturing and modifying any information you transmit and receive.

> By throwing scary warnings in front of users when there is no actual security threat

Most of these situations may be innocent but the problem is that they look identical to "actual security threats" so you don't have a choice. If there was a way to distinguish between them we/they would be doing it already.

What would the alternative be? Not warn users when they're about to login to a website that's pretending to be their bank?

Clearly the alternative is to return to HTTP, as these users are suggesting.

Surprised they're still posting, with their employers being shut down at the moment and all.

So instead of depending on one third party, you now depend on every Hotel, ISP and Cafe on the planet. Great compromise.

Would you tolerate using DANE?
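
(DANE, RFC 6698, pins a server's key in DNS via a TLSA record, secured by DNSSEC instead of a CA; a sketch with placeholder values:)

    ; usage 3 (DANE-EE), selector 1 (SPKI), matching type 1 (SHA-256)
    _443._tcp.blog.example.com. IN TLSA 3 1 1 <sha256-of-server-spki - placeholder>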

I made this to redirect HTTPS to HTTP with WhatsApp:

https://multiplayeronlinestandard.com/goto.html (the reason for the domain is that I will never waste time on HTTPS, but GitHub does it automatically for free, up to 100GB/month)

> I noticed that WhatsApp is even worse than Chrome: it opens HTTPS even if I share HTTP links.

Firefox does this when I type in a URL and the server is down. I absolutely hate this behaviour, because I run a bunch of services inside my network.

If I tell my browser 'fetch http://site.example', I mean for it to connect to site.example over HTTP on port 80, nothing more. If there is a web server running which wants to redirect me to https://site.example, awesome, but my browser should never assume I mean anything I did not say.

Sorry for the offtopicness but could you please email hn@ycombinator.com so I can explain why your comment only appeared here 14 hours after you tried 9 times to post it? :)

[deleted]

That's a good point to make, IMHO

What is funny about HTTPS is that early arguments for its existence IIRC were often along the lines of protecting credit card numbers and personal information that needed to be sent during e-commerce

HTTPS may have delivered on this promise. Of course HTTPS is needed for e-commerce. But not all web use is commercial transactions

Today, it's unclear who or what^2 HTTPS is really protecting anymore

For example,

- web users' credit card numbers are widely available, sold on black markets to anyone; "data breaches" have become so common that few people ask why the information was being collected and stored in the first place nor do they seek recourse

- web users' personal information is routinely exfiltrated during web use that is not e-commerce, often to be used in association with advertising services; perhaps the third parties conducting this data collection do not want the traffic to be optionally inspected by web users or competitors in the ad services business

- web users' personal information is shared from one third party to another, e.g., to "data brokers", who operate in relative obscurity, working against the interests of the web users

All this despite "widespread use of encryption", at least for data in transit, where the encryption is generally managed by third parties

When the primary use of third-party-mediated HTTPS is to protect data collection, telemetry, surveillance and ad-services delivery,^1 it is difficult for me to accept that HTTPS as implemented is primarily for protecting web users. It may benefit some third parties financially, e.g., CA and domain-name profiteers, and it may protect the operations of so-called "tech" companies though

Personal information and behavioral data are surreptitiously exfiltrated by so-called "tech" companies whilst the so-called "tech" company's "secrets", e.g., what data they collect, generally remain protected. The companies deal in information they do not own yet operate in secrecy from its owners, relentlessly defending against any requests for transparency

1. One frequent argument for the use of HTTPS put forth by HN commenters has been that it prevents injection of ads into web pages by ISPs. Yet the so-called "tech" companies are making a "business" out of essentially the same thing: injecting ads, e.g., via real-time auctions, into web pages. It appears to this reader that in this context HTTPS is protecting the "business" of the so-called "tech" companies from competition by ISPs. Some web users do not want _any_ ads, whether from ISPs or so-called "tech" companies

2. I monitor all HTTPS traffic over the networks I own using a local forward proxy. There is no plaintext HTTP traffic leaving the network unless I permit it for a specific website in the proxy config. The proxy forces all traffic over HTTPS
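
(A minimal sketch of that upgrade behavior, assuming mitmproxy as the local forward proxy; the allowlist entry is a placeholder:)

    # run with: mitmdump -s force_https.py
    from mitmproxy import http

    ALLOW_PLAINTEXT = {"legacy-http-site.example"}  # per-site exceptions

    def request(flow: http.HTTPFlow) -> None:
        # upgrade any outbound plain-HTTP request to HTTPS
        if flow.request.scheme == "http" and flow.request.host not in ALLOW_PLAINTEXT:
            flow.request.scheme = "https"
            flow.request.port = 443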

If HTTPS were optionally under user control, certainly I would be monitoring HTTPS traffic being automatically sent from own computers on own network to Google by Chrome, Android, YouTube and so on. As I would for all so-called "tech" companies doing data collection, surveillance and/or ad services as a "business"

Ideally one would be able to make an informed decision whether they want to send certain information to companies like Google. But as it stands, with the traffic sometimes being protected from inspection _by the computer owner_, through use of third party-mediated certificates, the computer owner is prevented from knowing what information is being sent

In my own case, that traffic just gets blocked