No firewall! Wow that's brave. Hetzner will let you configure one that runs outside of the box so you might want to add that too, as part of your defense in depth - that will cover you if you make a mistake with ufw. Personally I keep SSH firewalled only to my home address in this way; if I'm out and about and need access, I can just log into Hetzner's website and change it temporarily.
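For the Hetzner Cloud case, that external firewall can also be managed from the `hcloud` CLI; a rough sketch (the firewall name, home IP, and server name are placeholders):

```shell
# Cloud firewall that only allows SSH from one home address.
# It runs outside the VM, so a ufw mistake on the box can't expose port 22.
hcloud firewall create --name home-only-ssh
hcloud firewall add-rule home-only-ssh \
    --direction in --protocol tcp --port 22 \
    --source-ips 203.0.113.7/32    # your current home IP
hcloud firewall apply-to-resource home-only-ssh \
    --type server --server my-server
```

When you're out and about, updating the `--source-ips` rule (CLI or web console) is the "change it temporarily" step.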
Firewalls in the majority of cases don't get you much. Yes, they're a last line of defense if you do something really stupid and don't even know what your services are configured to listen on, but if you do know, the difference between running a firewall and not is minuscule.
There are far more important things, like actually knowing that you're running software with a widely known RCE that, it seems, doesn't even use established mechanisms to sandbox itself.
The way the author describes it, Docker being the savior appears to be sheer luck.
The author mentioned they had other services exposed to the internet (Postgres, RabbitMQ), which increases their attack surface. There may be vulnerabilities or misconfigurations in those services, for example.
Good security is layered.
But if they have to be exposed then a firewall won't help, and if they don't have to be exposed to the internet then a firewall isn't needed either; just configure them not to listen on non-local interfaces.
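Concretely, for the two services mentioned (exact config file paths vary by distro; the loopback-binding lines below are the usual directives):

```shell
# Postgres (postgresql.conf): listen only on loopback
#   listen_addresses = 'localhost'     # '*' is what exposes it publicly
# RabbitMQ (rabbitmq.conf): bind AMQP to loopback
#   listeners.tcp.local = 127.0.0.1:5672
# Then verify nothing unexpected is listening on a public interface:
ss -tlnp | grep -vE '127\.0\.0\.1|\[::1\]'
```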
This sounds like an extremely effective foot gun.
Just use a firewall.
I'm not sure what you mean; what sounds dangerous to me is not caring about what your services are listening on.
The firewall is there as a safeguard in case a service is temporarily misconfigured, it should certainly not be the only thing standing between your services and the internet.
A firewall is a safeguard, period. Like the firewall between the driver and engine in a car.
If you're at a point where you are exposing services to the internet but you don't know what you're doing, you need to stop. Choosing what interface to listen on is one of the first configuration options in pretty much everything; if you're putting in 0.0.0.0 because that's what you read on some random blogspam "tutorial", then you are nowhere near qualified to have a machine exposed to the internet.
"Don't do anything until you're an expert" is excellent gatekeeping. Fortunately this is Hacker News, so we can ignore the gatekeepers!
I suggest people fuck around and find out, just limit your exposure. Spin up a VPS with nothing important, have fun, and delete it.
At some point we are all unqualified to use the internet and we used it anyway.
No one is going to die because your toy project got hacked and you are out $5 in credits, you probably learned a ton in the process.
Absolutely. Thank you.
extremely loud incorrect buzzer noise, what are you going to say next "bastion servers are a scam"
But the firewall wouldn't have saved them if they're running a public web service or need to interact with external services.
I guess you can have the appserver fully firewalled and have another bastion host acting as an HTTP proxy, both for inbound as well as outbound connections. But it's not trivial to set up especially for the outbound scenario.
No you're right, I didn't mean the firewall would have saved them, but just as a general point of advice. And yes a second VPS running opnSense or similar makes a nice cheap proxy and then you can firewall off the main server completely. Although that wouldn't have saved them either - they'd still need to forward HTTP/S to the main box.
A firewall blocking outgoing connections (except those whitelisted through the proxy) would’ve likely prevented the download of the malware (as it’s usually done by using the RCE to call a curl/wget command rather than uploading the binary through the RCE) and/or its connection to the mining server.
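A minimal default-deny egress sketch with iptables (the proxy address and port are placeholders; nftables or conntrack-based rules work equally well):

```shell
# Allow replies to established connections and loopback traffic,
# allow only the whitelisted proxy outbound, drop everything else.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp -d 10.0.0.2 --dport 3128 -j ACCEPT   # egress proxy
iptables -A OUTPUT -j DROP
```

Under rules like these, the typical post-RCE `curl http://.../miner` fails outright unless the attacker also figures out the proxy.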
How many people do proper egress filtering, though, even when running a firewall?
In practice, this is basically impossible to implement. As a user behind a firewall you normally expect to be able to open connections with any remote host.
Not impossible at all with a policy-filtering HTTPS proxy. See https://laurikari.github.io/exfilguard/
In this model, hosts don’t need any direct internet connectivity or access to public DNS. All outbound traffic is forced through the proxy, giving you full control over where each host is allowed to connect.
It’s not painless: you must maintain a whitelist of allowed URLs and HTTP methods, distribute a trusted CA certificate, and ensure all software is configured to use the proxy.
The only time I have ever had a machine compromised in 30 years of running Linux is when I ran something exposed to the internet on a well known port.
I know port scanners are a thing but the act of using non-default ports seems unreasonably effective at preventing most security problems.
This is very, very, very bad advice. A non-standard port is not a defence. It’s not even slightly a defence.
Did I at any point in my previous comment say that using non-standard ports was my only line of defence ?
It's security through obscurity, which puts you out of view of the vast majority of the chaos of the internet. It by no means protects you from all threats.
Correct. From what I understand, Shodan has had for years a search feature in their paid plans to query for "service X listening on non-standard port". The only sane assumption is that any half-decent internet-census[tm] tool has the same as standard by now.
If you do any npm install, pip install ..., docker pull ... / docker run ..., etc. on Linux, it is very easy to get compromised.
I did docker pull a few times based on some web posts (which looked reasonable) and detected apps/scripts inside the containers connecting to some .ru sites, either immediately or a few days later...
I do this too, but I think it should only be a defense in depth thing, you still need the other measures.
Password auth being enabled is also very brave. I don't think fail2ban is necessary personally, but it's popular enough that it always comes up.
I don't whitelist IPs for SSH anymore, but I always run sshd on a randomly selected port in order to not get noticed by port scanners.
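For anyone trying this, it's a one-line sshd_config change plus matching client config (the port number below is just an example):

```shell
# /etc/ssh/sshd_config — move sshd off 22 to a random high port:
#   Port 48213
# ~/.ssh/config on your client, so you don't have to remember it:
#   Host myserver
#       HostName 203.0.113.7
#       Port 48213
systemctl reload sshd    # apply without dropping existing sessions
```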
I've been doing it for a really long time already, and I'm still not sure whether it has any benefit or it's just an umbrella in a sideways storm.
As long as you understand it's security by obscurity, rather than by cryptography.
I don't think it's wrong, it's just not the same as eg using a yubikey.
This won't hide you completely, but it will reduce log spam.
My sshd only listens on the VPN interface
I have SSH blocked altogether and use wireguard to access the server. If something goes wrong I can always go to the dashboard and reenable SSH for my IP. But ultimately your setup is just as secure. Perhaps a tiny bit less convenient.
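Binding sshd to only the VPN interface is also a small config change; a sketch assuming WireGuard with the server at 10.8.0.1 on wg0 (both placeholders):

```shell
# /etc/ssh/sshd_config — listen only on the WireGuard address:
#   ListenAddress 10.8.0.1
# sshd must start after wg0 is up or the bind fails; one common fix is a
# systemd drop-in ordering sshd after the tunnel:
systemctl edit sshd    # add "After=wg-quick@wg0.service" under [Unit]
```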
For the record this is only available for their VPS offering and not dedis. If you rent a dedi through their server auction you still need to configure your own firewall.
Dedicated servers can configure external firewalls too; there's a tab for it on the server config. It's basic but functional.
Yup. All my servers are behind Tailscale. The only thing I expose is a load balancer that routes TCP (email) and HTTP. That balancer is running Docker, fully firewalled (including the Docker bypasses). Every server is behind Hetzner's firewall in addition to the internal firewall.
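For readers unfamiliar with the "Docker bypasses": Docker writes its own iptables NAT rules, so a published port sails straight past ufw. The two usual mitigations look roughly like this (interface, IP, and ports are placeholders):

```shell
# Mitigation 1: publish only on loopback and put a proxy in front of it
docker run -d -p 127.0.0.1:8080:80 nginx
# Mitigation 2: filter in the DOCKER-USER chain, which Docker does respect
iptables -I DOCKER-USER -i eth0 -p tcp --dport 8080 \
    ! -s 203.0.113.7/32 -j DROP
```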
App servers run docker, with images that run a single executable (no os, no shell), strict cpu and memory limits. Most of my apps only require very limited temporary storage so usually no need to mount anything. So good luck executing anything in there.
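A sketch of that setup, assuming a statically linked binary (image name and limits are illustrative):

```shell
# Dockerfile for a shell-less, OS-less image:
#   FROM scratch
#   COPY app /app
#   ENTRYPOINT ["/app"]
# Run with strict limits, read-only rootfs, a tiny tmpfs, no capabilities:
docker run -d --read-only --tmpfs /tmp:size=16m \
    --memory=256m --cpus=0.5 --cap-drop=ALL myapp
```

With no shell in the image, the usual "RCE, then spawn /bin/sh and curl a miner" chain has nothing to exec.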
Way back in the day, I used to run WordPress sites. They would get hacked monthly, in every possible way. I learned so much, including the fact that often your app is your threat. With WordPress, every plugin is a vector. Also, the ability to easily hop into an instance and rewrite running code (looking at you, scripting languages, incl. JS) is terrible. This motivated my move to Go. The code I compiled is what will run. Period.
Honestly, fail2ban is amazing. I might do a write-up on the countless attempts on my servers.
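For the curious, a minimal fail2ban setup is just a short jail file (the thresholds below are illustrative, not recommendations):

```shell
# /etc/fail2ban/jail.local — ban IPs that fail SSH auth repeatedly:
#   [sshd]
#   enabled  = true
#   maxretry = 5
#   findtime = 10m
#   bantime  = 1h
fail2ban-client status sshd    # show the currently banned IPs
```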
The only way I've envisioned fail2ban being of any use at all is if you gather IPs from one server and use them across your whole fleet, and I had it running like that for a while. Ultimately I decided that all it does is give you a cleaner log file, since by definition it's working on logs of attacks/attempts that did not succeed. We need to stop worrying about attempts we see in the logs and let software do its job.