This is why I've blocked all HTTP traffic outgoing from my machines.

A lot of people have brought this up over the years:

https://www.reddit.com/r/AMDHelp/comments/ysqvsv/amd_autoupd...

(I'm fairly sure I have even mentioned AMD doing this on HN in the past.)

AMD is also not the only one. Gigabyte, ASUS, and many other vendors' autoupdaters and installers fail without HTTP access. I couldn't even set up my HomePod without allowing it to fetch HTTP resources.

From my own perspective, allowing unencrypted outgoing HTTP is a clear indication of problematic software. Even unencrypted (but maybe signed) CDN connections are at minimum a privacy leak. Potentially it's even a way for a MITM to exploit the HTTP stack, some content parser or the application's own handling. TLS stacks are a significantly harder target in comparison.

> Potentially it's even a way for a MITM to exploit the HTTP stack, some content parser or the application's own handling. TLS stacks are a significantly harder target in comparison.

For signed payloads there is no difference: you're trusting <client>'s authentication code to read a blob and a signature and validate them against a public key. For package managers, that usually means trusting only gpg - at the very least no less trustworthy than the many TLS and HTTP libraries out there.
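As a rough sketch of what that authentication code boils down to, here's a minimal example using Ed25519 detached signatures via Python's third-party "cryptography" package; the file names and the pinned key bytes are hypothetical stand-ins for whatever the client actually ships:

```python
# Minimal sketch: verify a downloaded blob against a detached signature
# using a public key pinned in the client. Assumes Ed25519 and the
# third-party "cryptography" package; file names and key bytes are
# hypothetical placeholders.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

PINNED_PUBKEY = bytes.fromhex("aa" * 32)  # placeholder: the real key ships with the client

def verify_blob(blob: bytes, signature: bytes) -> bool:
    """Return True only if the blob is signed by the pinned key."""
    try:
        Ed25519PublicKey.from_public_bytes(PINNED_PUBKEY).verify(signature, blob)
        return True
    except InvalidSignature:
        return False

# The blob and signature can arrive over plain HTTP, a mirror or a USB stick;
# trust comes from the signature check, not from the transport.
blob = open("update.bin", "rb").read()
sig = open("update.bin.sig", "rb").read()
if not verify_blob(blob, sig):
    raise SystemExit("rejecting unauthenticated payload")
# Only after verification does the client parse or install the blob.
```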

> For signed payloads there is no difference: you're trusting <client>'s authentication code to read a blob and a signature and validate them against a public key.

Assuming this all came through unencrypted HTTP:

- you're also trusting that the client's HTTP stack is parsing HTTP content correctly

- for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses

- you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)

- you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)

It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.

> you're also trusting that the client's HTTP stack is parsing HTTP content correctly

This is an improvement: HTTP/1.1 alone is a trivial protocol, whereas the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.

For technical reasons, unencrypted HTTP in practice also always means the simpler (and, for bulk transfers, more performant) HTTP/1.1, since HTTP/2 as deployed effectively requires TLS and the cleartext variant ("h2c") is not widely supported.
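For illustration (the host and path below are made up), a complete cleartext HTTP/1.1 exchange is just a few lines of text over a TCP socket; a real client still needs header and chunked-transfer handling, but the wire format itself is this small:

```python
# Sketch: a complete cleartext HTTP/1.1 request is a few lines of text
# over TCP. Host and path are placeholders; a real client still needs
# header and chunked-transfer handling, but the wire format is this small.
import socket

HOST, PATH = "mirror.example.org", "/dists/stable/InRelease"  # made up

request = (
    f"GET {PATH} HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request)
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("latin-1").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"
```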

> for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses

You don't, just like you don't trust a TLS server to generate valid TLS (and tunneled HTTP) messages.

> you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)

You don't. Authentication 101 (which also applies to how TLS works): authenticity is always validated before inspecting or interacting with the content. These are the same rules TLS has to follow when it authenticates its own messages.

Furthermore, TLS does nothing to protect you against a server delivering malicious files (e.g., a rogue maintainer or mirror intentionally giving you borked files).

> you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)

You don't, as the signature must be an authentic one from a trusted author (the specific maintainer of the specific package, for example). The server or an attacker is unable to craft valid signatures, so something "tacked-on" just gets rejected as invalid - just like if you mess with a TLS message.

> It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.

The basis of your trust is invalid and misplaced: not only does TLS not provide additional security here, it is also the more complex, fragile and historically vulnerable beast.

The only non-privacy risk of using non-TLS mirrors is that a MITM could keep serving you an old (but still valid and maintainer-signed) snapshot of your mirrors, withholding an update without you knowing. But such a MITM can also just fail your connections to a TLS mirror, and then you can't update either, so no: it's just privacy.
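(Repositories can blunt even that freeze risk by putting an expiry timestamp in the signed metadata, similar to the Valid-Until field some APT repositories ship. A minimal sketch, assuming the already-verified index exposes such a field; the field name and date format here are illustrative:)

```python
# Sketch: after the index signature has been verified, also reject stale
# metadata so a MITM cannot silently freeze you on old (validly signed)
# content. Assumes the signed index carries a "Valid-Until" timestamp,
# as some APT repositories do; field name and date format are illustrative.
from datetime import datetime, timezone

def check_freshness(verified_index_fields: dict) -> None:
    expiry = datetime.strptime(
        verified_index_fields["Valid-Until"], "%a, %d %b %Y %H:%M:%S %Z"
    ).replace(tzinfo=timezone.utc)
    if datetime.now(timezone.utc) > expiry:
        raise SystemExit("signed index has expired; possible freeze attack")
```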

> HTTP/1.1 alone is a trivial protocol

Eh? CWE-444 (HTTP request smuggling) would beg to differ: https://cwe.mitre.org/data/definitions/444.html

https://http1mustdie.com/

> the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.

An attacker doesn't get to attack the client's HTTP stack without first piercing the protection offered by TLS.

If you don't trust the HTTP client not to do something stupid, this all applies to HTTPS, too. Plus, it can also bork the TLS verification phase, or skip it altogether.
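(The classic version of that footgun, sketched with Python's widely used requests library and a made-up URL:)

```python
# Sketch of the footgun: HTTPS in name only, because certificate
# verification has been switched off. URL is a placeholder.
import requests

# Looks encrypted on the wire, but any MITM can present a self-signed
# certificate and read or rewrite the traffic, much like plain HTTP.
resp = requests.get("https://updates.example.com/manifest.json", verify=False)
```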

TLS stacks are generally significantly harder targets than HTTP ones. It's absolutely possible to use one incorrectly, but then we should also count all the ways you can misuse HTTP, and there are a lot more of those.

There's a massive difference. The entire HTTP stack comes into play before whatever blob is processed. GPG is notoriously shitty at verifying signatures correctly. Only with the latest Apt, which moved to Sequoia, is there some hope that the verification isn't as vulnerable.

In comparison, even OpenSSL is a really difficult target; it'd be massive news if you succeeded. Not so much for GPG. There are even formally verified TLS implementations if you want to go that far. PGP implementations barely compare.

Fundamentally, TLS is also tremendously more trustworthy (formally!) than anything PGP. There is no good reason to keep exposing all of this to potential middlemen rather than just using TLS. There have been real bugs where captive portals unintentionally caused issues for Apt. It's such an _unnecessary_ risk.

TLS leaves any MITM very little to play with in comparison.

Doesn't this break CRL fetching and OCSP queries?

Nothing really seems to care, except Prusa Connect.

AFAIK a lot of Linux package repositories are HTTP-only as well. Convenient for tracking which package versions have been installed on a given system.

They usually support both, but it's important to note that HTTPS is only used for privacy.

Package managers generally enforce authenticity through signed indexes and (directly or indirectly) signed packages, although be skeptical when dealing with new/minor package managers as they could have gotten this wrong.
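(A rough sketch of what "indirectly signed" usually looks like in practice; the index format, names and pinned key below are made up for illustration:)

```python
# Sketch of the usual two-step chain behind "indirectly signed" packages:
# 1) verify the repository index's detached signature against a pinned key,
# 2) check each downloaded package against the hash recorded in that index.
# Index format, names and the pinned key are made up for illustration.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

PINNED_REPO_KEY = bytes.fromhex("bb" * 32)  # placeholder for the repo's real key

def install_package(index_blob: bytes, index_sig: bytes,
                    pkg_name: str, pkg_blob: bytes) -> None:
    # Step 1: the index is what carries the signature.
    try:
        Ed25519PublicKey.from_public_bytes(PINNED_REPO_KEY).verify(index_sig, index_blob)
    except InvalidSignature:
        raise SystemExit("index signature invalid, refusing to proceed")

    # Step 2: packages are pinned by the hashes listed in the verified index.
    # Hypothetical index format: one "name sha256-hex" pair per line.
    hashes = dict(line.split() for line in index_blob.decode().splitlines() if line)
    if hashlib.sha256(pkg_blob).hexdigest() != hashes.get(pkg_name):
        raise SystemExit(f"{pkg_name}: hash mismatch, refusing to install")

    # Only now is the package handed to the unpacker/installer.
```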

Reducing the benefit of HTTPS to only privacy is dishonest. The difference in attack surface exposed to a MITM is drastic; TLS leaves so little available for any attacker to play with.