I’m not sure if that makes it better or worse.

It seems realistic to me, commonplace even. Lots to do in a company like this one.

I didn't know what Render was when I skimmed the article at first, but after reading these comments, I had to check out what they do.

And they're a "Cloud Application Platform", meaning they manage deploys and infrastructure for other people. Their website says "Click, click, done." which is cool and quick and all, but to me it's kind of crazy that an organization that should be really engineering-focused and mature didn't immediately notice 1.2TB being used and try to figure out why, when 120GB ended up being sufficient.

It gives much more of a "We're a startup, we're learning as we go" vibe, which again, cool and all, but hardly what people should be hosting their own stuff on.

7TB in an organization probably running petabytes of RAM in total can easily slip under the radar. There are a lot of systems and a lot of moving parts, and if it's not broken or triggering alarms, you probably don't care very much.

If your report for the month is "I saved a terabyte of RAM usage across our cluster estate!" and I as a manager do some quick maths and say great, that's our income from 2 median customers. We lost 8 customers because we didn't launch feature foo in time, which is what you were supposed to be working on, so your contribution for the month is a massive loss to the company...

Does that frame things differently? There are times in your product lifecycle when you don't want your developers looking at things like this, and times when you do.