The idea that an "observability stack" is going to replace shell access on a server does not resonate with me at all. The metrics I monitor with Prometheus and Grafana are useful, vital even, but they are always fighting the last war. What I need are tools for when the unknown happens.
The tool that manages all my tools is the shell. It is where I attach a debugger, it is where I install iotop and use it for the first time. It is where I cat out mysterious /proc and /sys values to discover exotic things about cgroups I only learned about 5 minutes prior in obscure system documentation. Take it away and you are left with a server that is resilient against things you have seen before but lacks the tools to deal with the future.
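Concretely, the kind of spelunking I mean looks something like this (the paths are only examples, and I'm assuming a cgroup v2 layout):

```
# poke at PSI and per-cgroup accounting straight from the shell
cat /proc/pressure/io                                        # system-wide IO stall info
cat /sys/fs/cgroup/system.slice/myapp.service/memory.stat    # per-cgroup memory breakdown
cat /sys/fs/cgroup/system.slice/myapp.service/io.pressure    # per-cgroup IO pressure
```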
>It is where I attach a debugger, it is where I install iotop and use it for the first time. It is where I cat out mysterious /proc and /sys values to discover exotic things about cgroups I only learned about 5 minutes prior in obscure system documentation.
It is. SSH is indeed the tool for that, but that's because, until recently, we did not have better tools and interfaces.
Once you try newer tools, you don't want to go back.
Here's an example from a fairly recent debug session of mine:
For many issues you don't need debugging facilities; you need observability and tracing. Instead of debugging the issue for tens of minutes at least, I just used an observability tool, which showed me the path in two minutes.
See, I would not reboot the server before figuring out what is happening. You lose a lot of info by doing that, and the worst thing that can happen is that the problem goes away for a little bit.
To be fair, turning it off and on again is unreasonably effective.
I recently diagnosed and fixed an issue with Veeam backups that suddenly stopped working partway through the usual window and stayed broken from that point on. This particular setup has three sites (prod, my home and DR) and five backup proxies. Anyway, I read logs and Googled a bit. I rebooted the backup server - no joy, even though it looked like the issue was there. I restarted the proxies and things started working again.
The error was basically: there are no available proxies, even though they were all available (not working, but not giving off "not working" vibes either).
I could have bothered digging into what went wrong, but life is too short. This is the first time that pattern has happened to me (I'll note it mentally, and it was logged in our incident log).
So, OK, I'll agree that a reboot should not generally be the first option. Whilst sciencing it or nerding harder is the purist approach, often a cheeky reboot gets the job done. However, do be aware that a Windows box will often decide to install updates if you are not careful 8)
My job as a DevOps engineer is to ensure customer uptime. If rebooting is the fastest fix, we do that. Figuring out the why is primarily the developers’ job.
This is also a good reason to log everything all the time in a human readable way. You can get services up and then triage at your own pace after.
My job may be different from others’ as I work at an ITSP and we serve business phone lines. When business phones do not work, it is immediately obvious to our customers. We have to get them back up, not just for their business but so they can dial 911.
Most fail states aren't worth preserving in an SMB environment. In larger environments, or ones equipped for it, a snapshot can be taken before rebooting, should the issue repeat.
Once is chance, twice is coincidence, three times makes a pattern.
Alternatively: if it doesn't happen again, it's not worth fixing; if it does happen again, you can investigate it then.
I've debugged so many issues in my life that sometimes I'd prefer things to just work, and if a reboot helps to at least postpone the problem, I'd choose that :D
I fail to understand how your approach is different from your parent's.
perf is a shell tool. iptables is a shell tool. sshguard is a log reader and ultimately you will use the CLI to take action.
If you are advocating newer tools, look into nft - iptables is sooo last decade 8) I've used the lot: ipfw, ipchains, iptables and nftables. You might also try fail2ban - it is still worthwhile even in the age of the massively distributed botnet, and covers more than just ssh.
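For anyone curious, a rough sketch of the same ipset-style setup in native nft (the table name, set name and addresses here are made up):

```
# a set that can hold single IPs and CIDRs, plus a drop rule that matches it
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy accept; }'
nft add set inet filter blocklist '{ type ipv4_addr; flags interval; }'
nft add rule inet filter input ip saddr @blocklist drop
nft add element inet filter blocklist '{ 192.0.2.10, 198.51.100.0/24 }'
```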
I also recommend a VPN and not exposing ssh to the wild.
Finally, 13,000 addresses in an ipset is nothing particularly special these days. I hope sshguard is building a properly optimised ipset table and that you're running appropriate hardware.
My home router is a pfSense jobbie running on a rather elderly APU4 based box and it has over 200,000 IPs in its pfBlocker-NG IP block tables and about 150,000 records in its DNS tables.
>perf is a shell tool. iptables is a shell tool. sshguard is a log reader and ultimately you will use the CLI to take action.
Well yes, and to be honest, in this case I did all of that over SSH: run `perf`, generate a flame graph, copy the .svg to my PC over SFTP, open it in a viewer.
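For anyone who wants the recipe, it was roughly this (the PID is a placeholder; stackcollapse-perf.pl and flamegraph.pl come from Brendan Gregg's FlameGraph scripts):

```
# on the server: sample stacks for 30 seconds, then turn them into an SVG
perf record -F 99 -g -p <pid> -- sleep 30
perf script > out.perf
./stackcollapse-perf.pl out.perf | ./flamegraph.pl > flame.svg

# from the workstation: pull the SVG down and open it
sftp user@server:flame.svg .
```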
What I really wanted is a web interface which will just show me EVERYTHING it knows about the system in the form of charts and graphs, so I can just skim through it and visually check that everything is all right, without using the shell and each individual command.
Take a look at the Netflix presentation, especially the screenshots of their web interface: https://archives.kernel-recipes.org/wp-content/uploads/2025/...
>look into nft - iptables is sooo last decade
It doesn't matter in this context: my iptables is the nftables-backed one (I'm not using iptables-legacy), and this exact scenario is 100% possible with native nft too.
>Finally, 13,000 addresses in an ipset is nothing particularly special these days
Oh, the other day I had just 70 `iptables -m set --match-set` rules, and did you know how surprisingly inefficient the source/destination address hashing for the set match apparently is?! That was debugged with perf as well, but I wish I had just had it as a dashboard picture from the start.
I'm talking about a sudden ~4 Gbit/s limitation on a 10 Gbit link.
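The investigation itself was nothing exotic; roughly this, if anyone wants to repeat it (exact kernel symbol names will vary by kernel and module):

```
# sample the whole box while pushing traffic, then see where the cycles go
perf record -a -g -- sleep 10
perf report          # look for ip_set / netfilter symbols near the top
# or watch it live instead:
perf top -g
```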
"What I really wanted is a web interface which will just show me EVERYTHING it knows about the system in a form of charts, graphs, so I can just skim through it and check if everything allright visually, without using the shell and each individual command."
Yes, we all want that. I've been running monitoring systems for over 30 years, and it is quite a tricky thing to get right. .1.3.6.1.4.1.33230 is my company's enterprise number, which I registered a while back.
The thing is that even though we are now in 2026, monitoring is still a hard problem. There are, however, lots of tools - way more than we had back in the day - but just as a saw can rip your finger off instead of cutting a piece of wood... well, I'm sure you can fill in the blanks.
Back in the day we had a thing called Ethereal, which was OK and nearly got buried. However, you needed some impressive hardware to use it. Wireshark is a modern marvel and we all have decent hardware. SNMP is still relevant too.
Although we have stonking hardware these days, you do also have to be aware of the effects of "watching". All those stats have to be gathered and stashed somewhere and be analysed etc. That requires some effort from the system that you are trying to watch. That's why things like snmp and RRD were invented.
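That is the whole point of the SNMP-plus-RRD approach: the watched box only answers tiny queries and the poller does the storing and graphing. Something like this (host and community string are placeholders):

```
# poll one interface counter; the target barely notices
snmpget -v2c -c public gw.example.net IF-MIB::ifInOctets.1
# or by numeric OID if the MIBs aren't installed locally
snmpget -v2c -c public gw.example.net 1.3.6.1.2.1.2.2.1.10.1
```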
Anyway, it is 2026 and IT is still properly hard (as it damn well should be)!
>Oh, the other day I had just 70 `iptables -m set --match-set` rules, and did you know how surprisingly inefficient the source/destination address hashing for the set match apparently is?! That was debugged with perf as well!
>I'm talking about a sudden ~4 Gbit/s limitation on a 10 Gbit link.
I think you need to look into things if 70 set-match rules are causing issues such that a 10 Gbit link ends up at 4 Gbit/s. I presume that if you remove the ipsets, the full 10 Gbit/s is restored?
Testing throughput and latency is also quite a challenge - how do you do it?
How did you use tracing to check the current state of a machine’s iptables rules?
In this case I used the `perf` utility, but only because the server does not have a proper observability tool.
Take a look at this Netflix presentation, especially the screenshots of their web interface tool: https://archives.kernel-recipes.org/wp-content/uploads/2025/...
That is a command line tool run over ssh. If you have invented a new way to run command line tools, that's great (and very possible: write a service that can fork+exec and map stdio), but it is equivalent to using ssh. You cannot run commands using traces.
With that mindset anything is equivalent to ssh. The command line is not the pinnacle of user interfaces and giving admins full control of the machine isn't the pinnacle of security either.
We need to accept that UNIX did not get things right decades ago and be willing to evolve UX and security to a better place.
Happy to try an alternative. Traces I have tried, and they are not an alternative.
Or… you build a container that runs exactly what you specify. You ship your logs, traces and metrics home so you can capture the stack traces and error messages, fix the bug and build another container to deploy.
You’ll never attach a debugger in production. Not going to happen. Shell into what? Your container died when it errored out and was restarted as a fresh state. Any “Sherlock Holmes” work would be met with a clean room. We have 10,000 nodes in the cluster - which one are you going to ssh into to find your container to attach a shell to it to somehow attach a debugger?
> We have 10,000 nodes in the cluster - which one are you going to ssh into to find your container to attach a shell to it to somehow attach a debugger?
You would connect to any of the nodes having the problem.
I've worked both ways; IMHO, it's a lot faster to get to understanding in systems where you can inspect and change the system as it runs than in systems where you have to iterate through adding logs and trying to reproduce somewhere else where you can use interactive tools.
My work environment changed from an Erlang system where you can inspect and change almost everything at runtime to a Rust system in containers where I can't change anything and can hardly inspect the system. It's so much harder.
Say you are debugging a memory leak in your own code that only shows up in production. How do you propose to do that without direct access to a production container that is exhibiting the problem, especially if you want to start doing things like strace?
I will say that, with very few exceptions, this is how a lot of $BigCos manage every day. When I run into an issue like this, I will do a few things:
- Rollback/investigate the changelog between the current and prior version to see which code paths are relevant
- Use our observability infra that is equivalent to `perf`, but samples ~everything, all the time, again to see which codepaths are relevant
- Potentially try to push additional logging or instrumentation
- Try to better repro in a non-prod/test env where I can do more aggressive forms of investigation (debugger, sanitizer, etc.) but where I'm not running on production data
I certainly can't strace or run raw CLI commands on a host in production.
Combined with stack traces of the events, this is the way.
If you have a memory leak, wrap the suspect code in more instrumentation. Write unit tests that exercise that suspect code. Load test that suspect code. Fix that suspect code.
I’ll also add that while I build clusters and throw away the SSH keys, there are still ways to gain access to a specific container to view the raw logs and execute commands, but like all container environments, it’s ephemeral. There’s SPICE access.
>What I need are tools for when the unknown happens.
There are tools which show what happens per process/thread and inside the kernel: profiling and tracing.
Check out Yandex's Perforator or Google's Perfetto. Netflix also has one; I forget the name.
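Even plain perf gives you a taste of it without installing anything exotic (the PID is a placeholder):

```
perf trace -p <pid>             # live syscall-level view of one process
perf sched record -- sleep 5    # record scheduler activity kernel-side for 5s
perf sched latency              # then summarise wakeup/run latencies per task
```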
I guess the question is why your observability stack isn't exposing proc and sys for you.
Mine (Prometheus) doesn’t, because there are a lot of high-dimensionality values to track in /proc and /sys that would blow out storage on a time-series database. Even if it did track them, it could not let me actively inject changes to a cgroup. What do you suggest I try that does?
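To be concrete, this is the kind of thing I mean by actively injecting changes, straight through /sys (cgroup v2, and the unit name is only an example):

```
# read the current usage, then tighten the limit on the live cgroup (as root)
cat /sys/fs/cgroup/system.slice/myapp.service/memory.current
echo $((2 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/system.slice/myapp.service/memory.max   # 2 GiB
```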
Experience from another company where I (and you) worked suggests that having the endpoints to expose the system metrics, without actually collecting and storing them, is the way to go.
Because you're holding it wrong!
Dashboards are something that looks cool, but they are usually not very helpful for debugging. What you're looking for is per-request tracing and logging, so you can grab a request ID and trace it (get the log messages associated with it) through multiple levels of the stack, maybe even across different services.
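Even with nothing fancier than structured logs and grep, the workflow looks something like this (the header name, URL and log paths are assumptions; any request-ID or trace-ID scheme works the same way):

```
# fire a request with a known ID, then chase that ID across the services' logs
curl -H 'X-Request-ID: 3f2a9c' https://api.example.com/orders/42
grep -r 3f2a9c /var/log/frontend/ /var/log/orders/ /var/log/payments/
```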
Debuggers are great, but they are not a good option for production traffic.
Agreed, this sounds like some complicated ass-backwards way to do what k8s already does. If it's too big for you, just use k3s or k0s and you will still benefit from the absolutely massive ecosystem.
But instead we go with multiple moving parts, all configured independently? CoreOS, Terraform and a dependence on Vultr. Lol.
Never in a million years would I think it's a good idea to disable SSH access. Like, why? Keys and a non-standard port already bring login attempts from China down to like zero a year.
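For reference, the relevant sshd_config bits (all standard OpenSSH directives; the port number is just an example):

```
# /etc/ssh/sshd_config
Port 2222                              # non-standard port cuts most drive-by scans
PasswordAuthentication no              # keys only
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```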