I get the whole x CPUs / y RAM story (it's akin to how clouds sell) and that often makes sense in the cloud, but when managing my datacenter operations a big constraining point is compute per kW. At 30 A / 208 V with 85% usable, I've got roughly 5 kW to work with per cabinet. If I'm putting in low-core-density, slow machines I've got to do a lot more management than if I'm using my Epyc 9755-based servers. This is a practical constraint, not just a theoretical "oh I want the latest and greatest". It's just that I can't really justify using up 4U and a kW on an Epyc 7003 series part. The compute density just isn't there for the power draw. The old chips are practically deadweight.
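To make the budget arithmetic above concrete, here's a minimal sketch using the figures from my setup (30 A at 208 V, ~85% usable). The function names and the 1 kW-per-server figure are just illustrative:

```python
def usable_kw(amps: float, volts: float, derate: float) -> float:
    """Usable power in kW for a single feed after derating."""
    return amps * volts * derate / 1000.0

def servers_per_cabinet(budget_kw: float, server_kw: float) -> int:
    """How many servers of a given draw fit inside the power budget."""
    return int(budget_kw // server_kw)

# 30 A * 208 V = 6.24 kVA raw; at 85% usable that's about 5.3 kW.
budget = usable_kw(30, 208, 0.85)

# At ~1 kW per 4U server (roughly an Epyc 7003-class box), that's
# only a handful of machines per cabinet, which is the whole problem.
count = servers_per_cabinet(budget, 1.0)
```

Which is why perf-per-watt, not perf-per-U, ends up being the number that matters at the rack level.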
Anyway, I'm glad to hear about the raise, because the team seems exceptional (judging by the posts you write now and those written before the company existed), and I love work in this area that simplifies hardware management. Good stuff, good luck, and congratulations!
> 5 kW to work with per cabinet
have my expectations been shot by reading too much about Nvidia's latest and greatest, or does this seem quite low?
Having worked in the space: 6 kVA was the norm 10-15 years ago; 12 kVA is the standard for regular compute workloads. With HPC/AI, though, all bets are off.
No, your expectations are not wrong. I'm a small business. A fully stacked AI/GPU cabinet draws multiples of this. A single GH200-based server will have two 2.7 kW power supplies in a 1U form factor. As you can imagine, I am not running a cabinet full of such servers. But you don't need AI power requirements to do normal software. And there's lots of normal software to do!
That’s standard practice in many data centers; but you can often pay more to get more amps delivered to your rack.