Well, I am running the stack in production right now, but everyone has a different understanding of what that actually means...
Do you have concrete examples of these catastrophic failures? I personally haven't experienced any myself over the years, but I'm doing very boring and typical stuff, so it wouldn't surprise me if there were still hard edges.
There's a difficult distinction here, you're right.
Technically even a single server running LAMP as root but taking frontend traffic meets the definition of "in production", but I think we all recognise that it's not the right idea.
What I'm referring to is: should the disk start to have issues, what does prometheus do? If the scrapers start to stall due to connection timeouts, what does prometheus do? If you're doing linear interpolation of data and you have massive gaps because you're polling opportunistically, what does prometheus do?
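To illustrate the interpolation concern, here's a toy sketch (made-up numbers, not Prometheus's actual code): linearly interpolating across a scrape gap produces smooth, plausible-looking values for a window where nothing was actually measured.

```python
# Sketch: linear interpolation across a scrape gap silently fabricates data.
# Timestamps and values are hypothetical.

def interpolate(samples, t):
    """Linearly interpolate between the two samples surrounding time t.
    samples: sorted list of (timestamp, value) pairs. Returns None if t
    falls outside the sampled range."""
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return None

# Scrapes every 15s, then a 10-minute gap while the target was stalled.
samples = [(0, 100), (15, 110), (30, 120), (630, 500)]

# A query at the midpoint of the gap gets a confident-looking answer,
# even though no sample exists between t=30 and t=630.
print(interpolate(samples, 330))  # -> 310.0
```

The danger is that nothing in the output distinguishes the fabricated midpoint from a real measurement.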
I'm all about boring technology, but prometheus assumes too much happy path. It assumes that a single node is enough for time series data that is used for alerting.
Which, it is: at very small scale and with best effort reliability.
It's not acceptable as soon as lost data could be critical for diagnosing major issues in billing systems, for actually billing users, or for correlating issues across multiple systems.
> should the disk start to have issues
If that happens, is prometheus really the biggest of your worries here? Software breaks left and right when disks disappear from under it; I'm not sure this is either unexpected or unique to prometheus.
> If the scrapers start to stall due to connection timeouts: what does prometheus do?
I'm having this "issue" all the time, as some of my WiFi-connected (less important) cameras are just barely within WiFi range, and I'm using prometheus to scrape metrics from them. Sometimes the request times out, then the next time it doesn't, and everything just works? What's the issue you're experiencing with this exactly?
> It's not acceptable as soon as lost data could be critically important in diagnosing major issues in billing systems, or actually billing users, or
Wait, what? Billing systems? That stuff would go into your proper database, wouldn't it? Sure, if prometheus/node_exporter fails or whatever, you won't get metrics out of the host, but again, if those things start failing on that host, the host has bigger issues than "prometheus sucks at scale".
I was eagerly awaiting being educated about potential gaps in my understanding of prometheus; instead it seems like you simply don't happen to like the way they do things. I was under the impression they did something wrong or something was broken, but these things just seem like the typical stuff you have to think about for any service you deploy.
Yes, my monitoring system not alerting me when the systems it runs on are failing is the entire problem.
That's not a general "software breaks when disks fail" situation: that's a monitoring system failing at its one job.
Your monitoring system failing silently when your infrastructure is under stress is precisely the failure mode that monitoring exists to prevent.
Zabbix solves this with native HA and self-checks. Prometheus makes it your problem to solve with external tooling, and most people don't, until they need it.
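For what it's worth, the usual external-tooling workaround is a dead man's switch: an alert that fires unconditionally, routed to a service outside the stack that pages you if the heartbeat ever stops arriving. A minimal sketch of such a rule (names and labels are illustrative, not from anyone's actual config):

```yaml
# Always-firing heartbeat alert. If Prometheus, Alertmanager, or the path
# between them breaks, the external receiver stops seeing this alert and
# raises the alarm itself.
groups:
  - name: meta-monitoring
    rules:
      - alert: Watchdog
        expr: vector(1)
        labels:
          severity: none
        annotations:
          summary: "Heartbeat alert; its absence means the alerting pipeline is broken."
```

Which rather proves the point: the last hop has to live outside the Prometheus stack entirely.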
Why wouldn't your monitoring system alert you when metrics suddenly disappear? Sounds like you need a better monitoring system, prometheus is not gonna magically solve that problem for you. No wonder you were having issues with prometheus...
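To be fair, you do have to write that rule yourself. A sketch of the common pattern using PromQL's `absent()` (the job name here is hypothetical):

```yaml
groups:
  - name: missing-metrics
    rules:
      - alert: NodeExporterAbsent
        # absent() returns 1 when no matching series exist at all,
        # i.e. the target's metrics have disappeared entirely
        # (as opposed to up == 0, which needs the series to still exist).
        expr: absent(up{job="node"})
        for: 5m
        annotations:
          summary: "No 'up' samples from job=node for 5 minutes."
```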
I'm not sure what you mean.
Of course the systems that have to alert me to failure have to be designed with mechanisms to alert me to the fact that they themselves are failing.
Zabbix, Nagios, Munin -- practically everything that existed before understood this.
Prometheus doesn't, because it intentionally optimised for being easy to deploy and for arranging a hierarchy of prometheus servers in a tree-like formation. Which makes sense, but it forces a much more distributed and difficult-to-reason-about model.
Monitoring systems can't be designed for the happy path. By definition, they only matter when things are going wrong, which is precisely when the happy path isn't available. Prometheus is excellent when everything is fine (scaling aside). That's not when you need your monitoring system to be excellent.