The existence of Starlink proves that this is false. Look at most current pitches; they don't talk about GW-class monsters anymore. There's absolutely nothing stopping a 20-30kW satellite bus the size of a Starlink (or I guess up to 100kW once Starship is available? It's all about payload fairing diameter) from hosting ~1 rack of compute and antennas. The economics may or may not make sense; we'll have to see.
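For scale, a rough power sketch (the per-server and per-rack wattages below are my own ballpark assumptions, not vendor specs):

```python
# Back-of-envelope: can a 20-30 kW bus power "~1 rack"? All wattages
# below are rough assumptions.
SERVER_KW = 10          # ~10 kW per 8-GPU HGX-class server (common ballpark)
SERVERS_PER_RACK = 3    # light packing to stay inside the bus budget
DENSE_RACK_KW = 120     # a dense NVL72-style rack is on the order of 120 kW

bus_kw = 30
light_rack_kw = SERVER_KW * SERVERS_PER_RACK
print(f"Light rack: {light_rack_kw} kW vs {bus_kw} kW bus -> fits")
print(f"Dense rack: {DENSE_RACK_KW} kW -> needs the ~100 kW Starship-class bus")
```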
There's very little research work needed to make this happen; it's all about engineering some satellite buses and having them fly in close formation to get a “data center”. This group of satellites in sun-synchronous orbit would relay to a comms constellation (e.g. Starlink itself) and operate as a global-scale data center. The heat management and orbital mechanics are all straightforward, really.
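On the heat side, a first-order radiator sizing (temperature and emissivity are my assumptions, and this ignores absorbed sunlight and Earth IR):

```python
# Radiator sizing via the Stefan-Boltzmann law. Temperature and
# emissivity are assumptions; absorbed sunlight and Earth IR ignored.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.9   # typical radiator coating
T_RAD_K = 300      # assumed radiator temperature

flux = EMISSIVITY * SIGMA * T_RAD_K**4  # ~413 W/m^2 per radiating face
for power_kw in (30, 100):
    area_two_sided = power_kw * 1000 / (2 * flux)
    print(f"{power_kw} kW -> ~{area_two_sided:.0f} m^2 of two-sided radiator")
```

~36 m² for 30 kW is in the same ballpark as the solar array, which is why I'd call it an engineering problem rather than a research one.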
I've heard this before. A datacenter and a Starlink satellite are not in the same ballpark of power usage and heat dissipation needs. They are orders of magnitude apart.
The point is that you don’t need to put a whole datacenter into a single satellite. You can put a single rack per satellite and have different racks communicate via antennas, laser links, or perhaps even wires since they’ll be launched in groups of 10-50 anyway. You could also dock them to each other, but that’s not necessarily needed.
I don't understand what makes these "datacenters" if they're distributed across satellites with WAN-esque interconnect.
Are we overloading the term "datacenter"? Or is it not overloaded but somehow able to achieve datacenter-like speeds / (tail) latency even when distributed across satellites?
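For reference, the raw speed-of-light numbers (formation spacing values are my assumption):

```python
# One-way propagation latency between satellites flying in formation,
# vs typical intra-DC numbers. Spacing values are assumptions.
C_KM_PER_S = 299_792  # speed of light in vacuum

for spacing_km in (1, 10, 100):
    one_way_us = spacing_km / C_KM_PER_S * 1e6
    print(f"{spacing_km:>3} km apart: {one_way_us:6.1f} us one-way")

# A switched hop inside a terrestrial DC is on the order of 1-10 us, so
# propagation alone is competitive only at ~km spacing; the harder part
# is matching fiber-like bandwidth over free-space optical links.
```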
Ok, but then what's the point? How is having a small amount of compute in space useful?
It's worth noting that GPUs have a much higher failure rate than traditional CPUs - over 10x, due to thermal stress; the amount of heat generated is very different. You can't really replace a GPU in a satellite (at least today?), which would turn most of these satellites into space debris on a ~5-year horizon.
Satellites usually use an older process node, since newer nodes are more easily bit-flipped by radiation. And blocking radiation is heavy.
AI workloads may also tolerate wrong calculations better than CPU workloads, where software will tend to panic.
Which is the same lifetime as a Starlink sat.
So what exactly is the benefit of having that thing in orbit then, where it costs you millions of dollars to put it there?
The current bottleneck on compute is power and zoning. Solar panels yield roughly 5x more energy in space, and there is no zoning in space.
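Rough arithmetic behind that ~5x (the capacity factors are my assumptions; real values vary by orbit and site):

```python
# Annual energy per kW of panel: space vs a good terrestrial site.
# Capacity factors below are assumptions.
HOURS_PER_YEAR = 8766

space_cf = 0.99   # dawn-dusk sun-synchronous orbit: near-continuous sun
ground_cf = 0.20  # decent ground solar site, after night/weather/atmosphere

space_kwh = space_cf * HOURS_PER_YEAR
ground_kwh = ground_cf * HOURS_PER_YEAR
print(f"Space:  {space_kwh:.0f} kWh per kW per year")
print(f"Ground: {ground_kwh:.0f} kWh per kW per year -> ~{space_kwh/ground_kwh:.1f}x")
```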
The current bottleneck is silicon. Every chip that is manufactured gets housed and powered. (It makes sense: the cost of compute is dominated by capex, the power costs are irrelevant, so they're ok paying a premium for power).
The space data center hypothesis relies on compute supply growing faster than power supply. (Both are bottlenecked on parts of the supply chain that will take ages to scale.)
Even if you believe that's the case, the point at which orbital data centers start making sense is incredibly sensitive to the exact growth rates.
The current bottleneck is not silicon. There is plenty of silicon locked up in previous-gen GPUs that are no longer efficient enough to run relative to newer models. The bottleneck is the economics of owning the older GPU models - which is why all the GPU neoclouds are gonna go bust unless they can get customers to keep renting old GPUs.
The economics are vastly different when opex is near zero for these things.
All of that is incorrect.
H100 rental prices are still as high as when the cards were brand new. The prices vastly exceed the power costs.
In a world where power or DC permits are the current bottleneck, those H100s would be getting retired in favor of Blackwells. But they aren't; they are instead being locked into years-long contracts.
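A quick sanity check on "prices vastly exceed the power costs" (the rental rate and electricity price are my assumptions, not quotes):

```python
# H100 hourly rental vs the electricity it burns. Figures are assumptions.
RENTAL_USD_PER_HR = 2.00  # ballpark on-demand H100 rate; varies by provider
GPU_WATTS = 700           # H100 SXM TDP
PUE = 1.3                 # assumed datacenter overhead multiplier
ELEC_USD_PER_KWH = 0.10   # assumed industrial electricity price

power_usd_per_hr = GPU_WATTS / 1000 * PUE * ELEC_USD_PER_KWH
print(f"Power:  ${power_usd_per_hr:.3f}/hr")
print(f"Rental: ${RENTAL_USD_PER_HR:.2f}/hr "
      f"(~{RENTAL_USD_PER_HR / power_usd_per_hr:.0f}x the power cost)")
```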
Why exactly would the H100s get retired for Blackwells if specifically power and DC permits were the bottleneck?
Because they are >10x more power efficient.
If silicon were relatively abundant and power/DC space scarce, you'd get an order of magnitude more bang for the Watt by replacing the H100s with newer GPUs.
But nobody is doing that. Blackwells are being installed as additional capacity, not Hopper replacements.
So it is pretty clear that silicon is the primary bottleneck.
Because you'd need to trash the old GPUs to make room for new ones. Right now new GPUs mostly come online in new DCs. TSMC fab capacity is much more limiting than DC construction, and that will likely remain the case. It's much easier to build a DC than a fab.
Millions of dollars? Where did you get that number from?
...how much do you think each rocket launch costs?
Not millions of dollars per sat. Are you being intentionally obtuse?
Are you intentionally misreading what I'm saying?
Self destruction is a feature, not a bug.
That said, eventually they could be lifted to higher orbits and have robots deliver and swap in updated compute (if it isn't made in space itself!).
"Space datacenter" -> overpriced starlink with some shitty edge compute -> "look guys, we built a space datacenter; earnings results to follow" -> number go up.
How much power could we get out of the fuel required to launch a 20-100kW rack into space, if we were to burn it on the ground?
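A rough back-of-envelope, where every number is an assumption:

```python
# Chemical energy in one Falcon 9's propellant vs the electricity a
# 100 kW rack draws over a ~5-year orbital life. All figures are rough
# assumptions.
PROPELLANT_TONNES = 500   # full-stack RP-1 + LOX load, approx
FUEL_FRACTION = 1 / 3.35  # O/F ratio ~2.35 -> ~30% of the mix is fuel
RP1_MJ_PER_KG = 43        # kerosene lower heating value

fuel_gwh = PROPELLANT_TONNES * 1000 * FUEL_FRACTION * RP1_MJ_PER_KG / 3.6e6
rack_gwh = 100e3 * 24 * 365 * 5 / 1e9  # 100 kW continuously for 5 years
print(f"Propellant chemical energy: ~{fuel_gwh:.1f} GWh")   # ~1.8 GWh
print(f"Rack consumption, 5 years:  ~{rack_gwh:.1f} GWh")   # ~4.4 GWh

# Burned in a ~40%-efficient plant, the fuel yields ~0.7 GWh of
# electricity - well under one rack's lifetime draw, and a single
# launch can carry several racks.
```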
I don't think sun-synchronous orbit is possible except in LEO.
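A quick check with the standard J2 nodal-precession formula bears this out (textbook constants; the node must precess one full revolution per year):

```python
# Required inclination for a sun-synchronous orbit vs altitude, from the
# J2 nodal-precession formula; above ~6,000 km no inclination works.
import math

MU = 3.986004418e14  # Earth GM, m^3/s^2
RE = 6.378137e6      # Earth equatorial radius, m
J2 = 1.08263e-3      # Earth oblateness coefficient
OMEGA_SSO = 2 * math.pi / (365.2422 * 86400)  # one rev per year, rad/s

for alt_km in (600, 2000, 5000, 6000):
    a = RE + alt_km * 1e3
    n = math.sqrt(MU / a**3)  # mean motion, rad/s
    cos_i = -OMEGA_SSO / (1.5 * n * J2 * (RE / a) ** 2)
    if abs(cos_i) <= 1:
        print(f"{alt_km:>4} km: i = {math.degrees(math.acos(cos_i)):.1f} deg")
    else:
        print(f"{alt_km:>4} km: impossible (|cos i| > 1)")
```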
LEO is high-risk, and Starlink satellites deorbit or burn up all the time. Not good from a capex POV on graphics cards.