This feels a little weird because while they are running the website itself (HTTP) off the Pi, they are handing off all TLS to a cloud provider.
So while the content is in RAM on the Pi, a lot of the heavier lifting (TLS termination) is done elsewhere, which saves a ton of CPU load on the Pi.
Yeah, I've seen this in more than a few places. There was a blog "running on a Wii" that, IIRC, was doing the same thing.
On the one hand I get it, TLS is pretty heavy, and it makes sense to take advantage of a VPS or Cloudflare or however you want to do it.
But once you are spinning up a VPS, the question is ... why the Pi? The VPS in the article has less RAM, but more storage. If you're already doing TLS termination on the VPS (the most RAM-intensive part), you might as well just do the whole shebang there.
I know this is all for fun, I'm just wondering -- is the Pi Zero really too slow to handle TLS, especially with an optimized TLS library? In this setup, the Pi is already being directly exposed to the Internet anyway; there's no VPN being used. That ARM11 isn't "fast", but surely a 1 GHz ARM11 can handle an optimized TLS library serving some subset of TLS 1.2.
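To get a rough feel for the asymmetric side (usually the expensive part of a handshake), you can time a 2048-bit modular exponentiation, the core RSA private-key operation, even in plain Python. The values below are random stand-ins sized like an RSA key, not a real key, and the absolute numbers are only illustrative:

```python
import secrets
import time

# Stand-in 2048-bit values; NOT a real RSA key, just sized like one.
n = secrets.randbits(2048) | (1 << 2047) | 1   # odd 2048-bit modulus
d = secrets.randbits(2048)                     # exponent sized like a private key
m = secrets.randbits(2040)                     # "message" to sign/decrypt

runs = 5
t0 = time.perf_counter()
for _ in range(runs):
    pow(m, d, n)                               # one RSA-sized modexp
per_op = (time.perf_counter() - t0) / runs
print(f"~{per_op * 1000:.1f} ms per 2048-bit modexp")
```

Even interpreted Python does this in single-digit milliseconds on a modern box; an optimized C library on the ARM11 (or an ECDSA certificate, which is far cheaper) only helps the case that a handful of handshakes per second is feasible.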
The TLS termination isn't actually on the VPS. The article details that Tierhive has an haproxy edge service (handling the TLS), which then has the VPS as the backend; the VPS is just doing TCP proxying with socat to the DDNS-exposed home server FQDN. Feels like a lot of unnecessary loops. Kinda fun I guess but, just, why
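To make the "just TCP proxying" point concrete, here's a minimal sketch of that kind of blind byte relay in Python. It handles a single connection for brevity (real socat with `fork` handles many), and all hosts/ports are hypothetical:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes one way until the source closes, then pass EOF downstream."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)   # signal EOF to the other side
        except OSError:
            pass

def relay(listen_port, backend_host, backend_port):
    """Accept one client and blindly shuttle bytes to/from the backend --
    roughly what a socat TCP-LISTEN -> TCP forward does, minus forking."""
    with socket.create_server(("127.0.0.1", listen_port)) as srv:
        client, _ = srv.accept()
        backend = socket.create_connection((backend_host, backend_port))
        t = threading.Thread(target=pipe, args=(client, backend))
        t.start()
        pipe(backend, client)
        t.join()
        client.close()
        backend.close()
```

The relay never inspects the payload, which is exactly the point being made: whatever haproxy decrypted travels onward as plain TCP.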
Not disagreeing with you, but that makes it even worse.
Agreed
Yes it is, "we plan to use our external VPS for handling the TLS termination". Edit: Ah, I see you are just pointing out that termination is on the haproxy service, not the VPS. I thought you were implying it was terminating on the Pi, my apologies.
The VPS is running socat only and just doing TCP forwarding. There is a shared haproxy, also run by the same host, sitting in front of the VPS and handling the TLS. I encourage you to read the article fully. They probably should have said "VPS provider" instead of VPS for the TLS bit.
But like you said in another comment, it's plain text after the haproxy, so there are two more plaintext hops: at least one of them (VPS -> Pi) goes over the public internet, and I'm not sure whether haproxy -> VPS stays internal to the provider's network (maybe). Not ideal in my book.
This reminds me of the recent "running Doom on DNS" post which in actuality was "running Doom from DNS [as a storage device] on my PC" which is multitudes less impressive.
It reminds me of the footage of Doom running on a pregnancy test. And then it turned out it was another computer just displaying to the built-in AMOLED display.
What was supposed to be a cool achievement is rendered pointless when one of the key elements is offloaded elsewhere.
Sometimes these demos enable caching on the reverse proxy. So for these tiny demo HTML pages, your request may never even reach the fun tiny computer the demo is supposed to showcase.
I wouldn’t consider “the way most people do TLS in 2026” weird. That said, this isn’t all that impressive or interesting: a computer… serving a website.
Is sending plaintext traffic over the open Internet "the way most people do TLS in 2026"? Am I missing something from the post?
Many (most?) setups host the web application and/or content in a separate process, and sometimes a separate server, from where TLS (HTTPS) termination happens. HAProxy, Traefik, Caddy and Nginx as reverse proxy and TLS termination servers are pretty common, even more so if you're containerizing your applications. It dramatically simplifies the application stack.
While I might argue that most are probably still hosting and doing PHP on the same server, it's not the typical approach for any custom software at this point.
It's vastly different to do TLS termination within your own network versus doing it on some rando VPS and then sending plain TCP over the internet. It's not an argument about it being on the same server.
The VPS is your security in this case. It's not sending plaintext over the internet, is it?
Edit: No, the article mentions listening on port 80 at home. I thought they'd be SSH tunneling or something. That is unusual, but I guess for a static website it doesn't really matter.
> That is unusual, but I guess for a static website it doesn't really matter.
It sorta does matter. Either the actual raspi does nothing of value or the traffic has value that should be protected.
Sure, I've heard the argument that public HTTP traffic does not need encryption, but if it is of any value then both parties have an interest in it being unmanipulated, uncensored, validated, or all of the above. Even if it's just preventing the ISP from injecting dumb ads.
Yeah that's a valid concern. Idk, nothing about this setup makes sense.
Considering that a 'base' Raspbian-type install can use something like 160MB of RAM with openssh running and a lot of other systemd-launched daemons in the background, that leaves plenty of RAM available for a stock apache2 or nginx setup with TLS. No, it won't be able to serve a ton of simultaneous requests, but I'm in agreement with the other comments here that doing pure port-80 HTTP and putting it behind a secondary TLS proxy is not really "serving the website" from the Raspberry Pi.
Hmm, is Linux on ARM still that frugal?
Don't have a Pi plugged in right now to check, but I have a fresh x86_64 Linux server install that's using about 600 MB of RAM; so far I've only gotten around to installing and configuring openssh and samba.
Oh, and it's Devuan, so there's not even systemd to blame. I think it was close to 1 GB with a systemd distro.
It is more than a little weird. A Pi Zero is more than capable of handling HTTP/1.1 and TLS 1.3 for a handful of connections per second. This machine is 10x what we were running web servers on in the '90s.
Also, all these web pages are effectively served from RAM anyway: modern OSes automatically cache files in the page cache on first access.
I was initially wondering whether the Pi Zero even has floating point. But then I remembered that encryption only uses integer operations, and not even very large ones.
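For example, the ChaCha20 quarter-round (from RFC 8439, one of the standard TLS 1.3 ciphers) is nothing but 32-bit adds, XORs, and rotates:

```python
def rotl32(v, n):
    """Rotate a 32-bit word left by n bits."""
    return ((v << n) | (v >> (32 - n))) & 0xFFFFFFFF

def quarter_round(a, b, c, d):
    """ChaCha20 quarter-round (RFC 8439): only 32-bit integer adds, XORs, rotates."""
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 16)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 12)
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 8)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439 section 2.1.1
out = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
print([hex(x) for x in out])
# → ['0xea2a92f4', '0xcb1cf8ce', '0x4581472e', '0x5881c4bb']
```

Every operation here maps to a single integer instruction on the ARM11, which is exactly why these ciphers run fine on FPU-less or FPU-weak cores.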
Yeah, I ran a phpBB forum (alongside my normal static site) on a 486 in 2003 or so. It worked. It was slow, but it worked just fine for my friends and me! I remember it took multiple minutes to generate the SSH server key after the initial install lol
A 486 in 2003? Pentiums were shipping by the mid-90s, did you just have super old hardware lying around?
I retired my 486 in ‘95 or thereabouts…
In the mid-90s, I retired my 486 hardware and brought it over to a local ISP that we were friends with.
It had a second life doing stuff like delivering mail, handling IRC, serving web pages, and whatever else a few of us wanted from it. The performance was fine.
(The Pentium-ish machines stayed on desktop duty where GUIs devoured resources.)
Anyone remember 32-bit/33 MHz PCI slot SSL accelerator cards? As I recall, OpenBSD had kernel driver support for several.
Yes. I still have one. I used it in a Soekris 486-class machine running OpenBSD for many, many years. I stopped using the Soekris when they dropped driver support for the accelerator. I think i386 support might be gone now too. Cool little card; I think I might still have it in a drawer somewhere.
In unrelated news: I think the Magic-1 is still online. (https://www.magic-1.org, be gentle with it)
>This machine is 10x what we were running web servers on in the '90s.
Kind of irrelevant, since operating systems and web pages in the '90s had significantly smaller footprints; the web was mostly plain text back then. Windows XP with its GUI would run Max Payne on 128MB of RAM. You could do a lot more back then; you can't do modern stuff like that today with 128MB of RAM.
You can host such sites perfectly well nowadays. I’ve often served hand-written HTML pages of only a few lines.
LLMs, including open ones, are really good at this, it turns out. It stands to reason: there's tons of training material out there that they have no doubt consumed and are ready to regurgitate.
Yesterday I one-shotted several interactive pages that Qwen built out of straight HTML and Javascript. I handed it my API (source code, not even a swagger file, via an MCP that Qwen wrote for me), asked for a frontend, and it delivered. One page at a time to keep context down, and I might've gotten lucky on the first draw, but after the first one I told it to make the next ones like the first.
Can't say I've had that experience with backend languages & frameworks, including writing that same API, but perhaps I'm off the beaten path with those, or perhaps there's greater breadth of things to do vs a narrower set of acceptance criteria? IDK.
Here I was sweating that I'd have to research and learn a current-day frontend framework. It felt like a magic wand using consumer-grade AI. HTML and plain old Javascript was plenty.
Tangent but apropos of other contemporary threads on HN, it puts a spin on supply chain threats. There's no NPM or anything, except perhaps whatever mysteries are baked into the model.
The contents of webpages are largely the same.
HTML code, CSS, Javascript, Images.
In this case, they are static elements, which can even be cached locally to share more easily.
If someone wants a massive build system to render a static HTML page, that's on them, and their personal interpretation. Increasingly, and maybe more often than not, there is more than one way to get the same outcome.
The fact that there are hundreds of downloads for a single web page is up to the constructor of that page. Still, these things can be reasonably cached; for example, host it on the Pi, then put Cloudflare in front of it or something.
The Pi Zero might not be for you, or it might just be an easy target to pick on. Which criticisms would go away if it were on a regular Pi?
Even then... it's usually built before it's deployed on the server. The server is still delivering text, CSS, JS, and images, and images have always been pretty large, so your connection is tied up for a little bit longer. And while content was smaller in the '90s, connections themselves are much faster today: in the '90s you were lucky to be hosting on a T1 or faster, with clients on modems. Today, you've likely got between 100 Mb and 2 Gb of uplink on home connections, let alone business connections that generally start at 1 Gb; that's roughly 600x a T1's bandwidth for the server.
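The back-of-the-envelope math behind that 600x figure, taking a T1's 1.544 Mb/s line rate against an assumed 1 Gb/s uplink:

```python
t1_mbps = 1.544        # T1 line rate
uplink_mbps = 1000.0   # a typical 1 Gb/s uplink today
ratio = uplink_mbps / t1_mbps
print(f"{ratio:.0f}x a T1")  # → 648x a T1
```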
>Which criticisms would go away if it was on a regular pi?
Maybe you misunderstood. Which criticism did I make of the pi zero? I criticized present day SW.
Why even serve TLS here?
because we collectively seem to have decided that plaintext HTTP is worse than... the computational cost to the planet.