Help me understand why not? I know solar power generation in space, and "beaming" the power back, was a naive idea. But this would actually use the power up there, mostly for training, but also for inference.
That claim seems reasonable. I have zero knowledge of the economics of launching and maintaining satellites though.
As I understand it, the problem is cooling. There isn't any medium to take away the heat, so the only option is to slowly radiate it away.
Which is apparently manageable. Scott Manley isn’t an industry veteran, but he does know a lot about space engineering and science. Here’s his breakdown of the feasibility, and heat management is not really a major issue:
https://youtu.be/DCto6UkBJoI
Heat management is not a technical issue but a reliability issue.
These satellites will be in orbits where they are always illuminated. That means constant temperatures, which means no thermal cycling and no reliability concerns.
When people say 'running it hot is bad for reliability', they mean 'running it hot and then bringing it back to room temp from time to time will eventually kill it'.
It's in space, which means liquid cooling loops to carry the heat to radiators. No rocket is big enough, so it has to be assembled on orbit. And no liquid cooling system, terrestrial or otherwise, is 100% leak-free.
Anyone who has googled just once to ask if datacenters in space make any sense, has found out they don't because they can't get rid of heat.
That leaves only two kinds of people left who are still talking excitedly about datacenters in space: The uninformed and the grifters.
The existence of Starlink proves that this is false. Look at most current pitches: they don't talk about GW-class monsters anymore. There's absolutely nothing stopping a 20-30 kW satellite bus the size of a Starlink (or I guess up to 100 kW once Starship is available; it's all about payload fairing diameter) from hosting ~1 rack of compute and antennas. The economics may or may not make sense; we'll have to see.
There's very little research work needed to make this happen; it's all about engineering some satellite buses and having them fly in close formation to form a "data center". This group of satellites in sun-synchronous orbit would relay to a comms constellation (e.g. Starlink itself) and operate as a global-scale data center. The heat management and orbital mechanics are all straightforward, really.
I've heard this before. A datacenter and a Starlink satellite are not in the same ballpark of power usage and heat dissipation needs. They are orders of magnitude apart.
The point is that you don’t need to put a whole datacenter into a single satellite. You can put a single rack per satellite and have different racks communicate via antennas, laser links, or perhaps even wires since they’ll be launched in groups of 10-50 anyway. You could also dock them to each other, but that’s not necessarily needed.
I don't understand what makes these "datacenters" if they're distributed across satellites with WAN-esque interconnect.
Are we overloading the term "datacenter"? Or is it not overloaded but somehow able to achieve datacenter-like speeds / (tail) latency even when distributed across satellites?
Ok, but then what's the point? How is having a small amount of compute in space useful?
It's worth noting that GPUs have a much higher failure rate than traditional CPUs: over 10x the failure rate, due to thermal stress. The amount of heat generated is very different. You can't really replace a GPU in a satellite (at least today?), which would turn most of these satellites into space debris on a ~5-year horizon.
Satellites usually use an older process node, since newer nodes are more easily bit-flipped by radiation, and shielding against radiation is heavy.
AI workloads may tolerate wrong calculations better than CPU workloads, where software tends to panic.
Which is the same lifetime as a starlink sat
So what exactly is the benefit of having that thing in orbit then, where it costs you millions of dollars to put it there?
The current bottleneck on compute is power and zoning. Solar panels are 5x more efficient in space, and there is no zoning in space.
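Loose back-of-envelope for that 5x figure. The ground-side numbers below are typical mid-latitude assumptions of mine, not from the comment:

```python
# Compare annual energy per unit panel area: dawn-dusk sun-synchronous
# orbit (near-continuous sunlight, full solar constant) vs. a typical
# ground installation (atmospheric losses, night, weather).
SPACE_FLUX = 1361.0   # W/m^2, solar constant at 1 AU
GROUND_FLUX = 1000.0  # W/m^2, peak insolation at the surface
SPACE_CF = 0.99       # assumed capacity factor in dawn-dusk SSO
GROUND_CF = 0.22      # assumed ground capacity factor (mid-latitude)

gain = (SPACE_FLUX * SPACE_CF) / (GROUND_FLUX * GROUND_CF)
print(f"{gain:.1f}x")  # → 6.1x
```

So the "5x" claim is in the right ballpark, at least for panel area; it says nothing about cost per watt.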
The current bottleneck is silicon. Every chip that is manufactured gets housed and powered. (It makes sense: the cost of compute is dominated by capex, the power costs are irrelevant, so they're ok paying a premium for power).
The space data center hypothesis relies on compute supply growing faster than power supply. (Both are bottlenecked on parts of the supply chain that will take ages to scale.)
Even if you believe that's the case, the point at which orbital data centers start making sense is incredibly sensitive to the exact growth rates.
The current bottleneck is not silicon. There is plenty of silicon locked up in previous gen GPUs that are no longer efficient enough to run relative to newer models. The bottleneck is the economics of owning the older GPU models - which is why all the GPU neoclouds are gonna go bust unless they can get customers to continue renting old GPUs.
The economics are vastly different when opex is near zero for these things
All of that is incorrect.
H100 rental prices are still as high as when the cards were brand new. The prices vastly exceed the power costs.
In a world where power or DC permits are the current bottleneck those H100s would be getting retired in favor of Blackwells. But they aren't. They are instead being locked in for years long contracts.
Why exactly would the H100s get retired for Blackwells if specifically power and DC permits were the bottleneck?
Because they are >10x more power efficient.
If silicon were relatively abundant and power/DC space scarce, you'd get an order of magnitude more bang for the Watt by replacing the H100s with newer GPUs.
But nobody is doing that. Blackwells are being installed as additional capacity, not Hopper replacements.
So it is pretty clear that silicon is the primary bottleneck.
Because you'd need to trash the old GPUs in order to make room for new GPUs. Right now new GPUs get online mostly in new DCs. TSMC fab capacity is much more limiting than DC building and it will likely keep being the case. It's much easier to build a DC than a fab.
Millions of dollars? Where did you get that number from?
...how much do you think each rocket launch costs?
Not millions of dollars per sat. Are you being intentionally obtuse?
Are you intentionally misreading what I'm saying?
Self destruction is a feature, not a bug.
That said, eventually they could be lifted to higher orbits and have robots deliver and swap in updated compute (if not compute made in space itself!).
"Space datacenter" -> overpriced starlink with some shitty edge compute -> "look guys, we built a space datacenter; earnings results to follow" -> number go up.
How much power could we get out of the fuel required to launch a 20-100kW rack in to space, if we were to burn it on the ground?
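A hedged sketch of an answer. The propellant mass is an approximate public figure for a Falcon 9 (RP-1 across both stages), and the power-plant efficiency is my assumption:

```python
# Burn the rocket's kerosene in a ground power plant instead of
# launching, and see how long it would run a 100 kW rack.
RP1_KG = 150_000      # approx. RP-1 across both Falcon 9 stages, kg
RP1_MJ_PER_KG = 43.0  # energy density of kerosene, MJ/kg
PLANT_EFF = 0.45      # assumed combined-cycle thermal efficiency

electrical_mj = RP1_KG * RP1_MJ_PER_KG * PLANT_EFF
mwh = electrical_mj / 3600   # 1 MWh = 3600 MJ
rack_kw = 100
hours = mwh * 1000 / rack_kw
print(f"{mwh:.0f} MWh ≈ {hours/24/30:.0f} months of a {rack_kw} kW rack")
```

So roughly a year of rack runtime per launch's worth of fuel, under these assumptions. Of course, the launched rack then runs on free sunlight for years, so this is a one-time cost, not a fair ongoing comparison.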
I don't think sun-synchronous orbit is possible except in LEO.
LEO is high risk, and Starlink satellites deorbit or burn up all the time. Not good from a capex POV on graphics cards.
The area you need in radiators is only half the area you need in solar panels. So it's definitely not a deal breaker.
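The claim checks out on a napkin. A minimal Stefan-Boltzmann sketch; panel efficiency, radiator temperature, and emissivity are my assumed values, not from the thread:

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR_FLUX = 1361.0  # solar constant at 1 AU, W/m^2
PANEL_EFF = 0.30     # assumed triple-junction cell efficiency
RAD_TEMP = 300.0     # assumed radiator temperature, K
EMISSIVITY = 0.90    # assumed radiator surface emissivity
SIDES = 2            # double-sided radiator, radiating to deep space

def panel_area(power_w: float) -> float:
    """Solar array area needed to generate power_w electrical watts."""
    return power_w / (SOLAR_FLUX * PANEL_EFF)

def radiator_area(power_w: float) -> float:
    """Radiator area needed to reject power_w watts to vacuum
    (ignoring Earth albedo and solar loading for simplicity)."""
    return power_w / (SIGMA * RAD_TEMP**4 * EMISSIVITY * SIDES)

p = 100_000  # a 100 kW satellite bus
print(f"panels:   {panel_area(p):.0f} m^2")    # → 245 m^2
print(f"radiator: {radiator_area(p):.0f} m^2") # → 121 m^2
print(f"ratio:    {radiator_area(p)/panel_area(p):.2f}")  # → 0.49
```

Run the radiator hotter and the area shrinks further (it scales with T^4), at the cost of pumping heat uphill from the chips.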
It's still very dumb because of economics, logistics, serviceability and more.
Solar on earth was dumb because of logistics, right?
Things get cheaper.
Long term maybe, but I don't think it makes sense now, and won't for many years.
Pretty much everything has been "very dumb because of economics, logistics, serviceability and more". What kind of hacker are you to be on this site lol
SpaceX has presented on this and it's fairly straightforward; they already do it with Starlink satellites, and this would just be at a larger scale. Sounds like you are the uninformed one (or an EDS victim).
Starlink satellites don't generate the sort of heat a datacenter full of GPUs does. The ISS has enormous radiators, and it's only in space because it's a space station. Putting datacenters there is just goofy given the amount of available space on the ground.
All of that has been repeatedly addressed in anything that discusses it, if you care to try to understand. It has ~nothing to do with available space, the US grid can’t handle the current rate of expansion. It’s bad enough that apparently Span, the smart electrical panel company, is pitching a box full of Blackwells that’ll sit outside new construction homes and use all the headroom on residential 200A circuits. Space is starting to look reasonable.
Related, US readers should call their reps and ask them to support a successor to EPRA, the Energy Permitting Reform Act, the vast majority of the generation that’s waiting for approval is from clean energy sources. It nearly got over the line before the last Congress ended, and it’s one of the most impactful things we can do to combat climate change, combined with electrifying various carbon intensive activities.
> the US grid can’t handle the current rate of expansion
This is a self defeating argument. Neither can space!
Any scenario in which you can get data centers and power into orbit is easier on land.
Not quite. I'm rooting for the solar/battery microgrids down here (one of the startups I've invested in is working on those), but you don't really even need batteries for panels in a dawn-dusk sun-synchronous orbit, which is a pretty huge advantage. Also, there aren't weeks where you get 1/4 the output because it's cloudy all week, and your output isn't crushed during winter.
And the hardest part of my home solar install, by far, was the counterparties (inspectors, power company, and subcontractors). My understanding is that it's much worse when you're trying to get a grid scale install online, the interconnection queue is currently years long. This avoids most counterparties except the ones they're already routinely dealing with.
Why are you comparing the output of a datacenter to the output of a single sat?
How much power do starlink sats draw and how does it compare to say 8x H200s?
> This gives us access to more than 300 megawatts of new capacity (over 220,000 NVIDIA GPUs) within the month.
27,500 satellites need launching - fast! - just for Claude to meet a demand spike?
There are over 10k starlink sats in orbit already. They’re obtaining approval for another ~30k.
So clearly not a problem for them.
I've heard this before, and these are not comparable at all. Starlink is missing a few digits in its power usage and heat dissipation needs compared to a datacenter.
Why did we start saying EDS this week, just in time for the IPO?
EDS? Like still believing Elon's claims are truthful?
what do you mean they can’t get rid of heat? radiators exist
https://en.wikipedia.org/wiki/External_Active_Thermal_Contro...
All that gets you is ~70 kW of cooling. Radiating to vacuum isn't very efficient.
And that’s sufficient for roughly 100 unoptimized DC grade H200s.
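Quick sanity check of the "roughly 100" figure, assuming NVIDIA's published 700 W TDP for the H200 SXM board and the ~70 kW ISS EATCS capacity quoted above (ignoring CPU, networking, and power-conversion overhead):

```python
EATCS_KW = 70.0      # ISS External Active Thermal Control System capacity
H200_TDP_W = 700.0   # NVIDIA H200 SXM board power

gpus = EATCS_KW * 1000 / H200_TDP_W
print(int(gpus))  # → 100
```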
Not efficient, and it doesn't have to be, because the cooling system has 0 opex cost. And capex clearly can be made to work.
OK, so you've got a football-field-sized solar and radiator array to support 100 H200s.
Why are we not building it on land again in some abandoned mall's parking lot?
Because it’s allegedly more expensive
Dang, sucks we can never improve any technology. Let's just call it quits, guys.
Maybe one day we can, but it's definitely not in a category "there is no doubt".
Of course not, where's the fun in that category?
Technology is wonderful.
Physics still gets a say.
Cooling a space station is very different from cooling chips. One requires extensive piping, the other a simple radiator.
Both require the same thing - moving heat - and you’ll find plenty of piping in a datacenter for this reason.
Space radiators are not very efficient due to lack of airflow in space.
Efficiency in the cooling loop is of no consequence as it has 0 opex cost in space. Do the capex numbers make sense?
Scott Manley, who I'd say is one of the top pop-space YouTubers, says otherwise. If anything it's easier in space. On Earth, most of the complexity in a datacenter is cooling. In space you just radiate it away.
And SpaceX has already proven they can launch something like small datacenters 10k times over by launching Starlink (up to 20 kW of solar each, IIRC).
FWIW Musk should support Bernie Sanders more. Putting moratoriums on datacenters would make space based ones far more economical.
He just mentions and walks through idea of having some amount of compute up there and what the heat rejection calculations roughly look like. He doesn't actually explore the economics of doing such a thing or discuss if it's actually worth doing.
It's not that you can't put a server in space, but the costs to do it almost assuredly don't make any sense. Because, if you can do it in space you can do it easier on the ground and save yourself millions in launch cost and extra complexity. Your cooling challenges are way cheaper and simpler in an atmosphere.
There's nothing much being in space really gets you, other than it makes it harder for a government to take your computers away. Not impossible, just harder.
Especially with everyone clamoring to have datacenters built in their backyards. There's absolutely no way there can be an advantage to figuring out compute outside Earth's magnetosphere, especially since none of the engineers at SpaceX would ever think of any long-term benefits of that.
I'm just responding to op saying it's impossible to get rid of heat. None of us touched economics.
“YouTuber” is an extremely poor qualification for a supposedly trusted source
He's a physicist though, not just a mic jockey
Let's hear what Big Money Salvia has to say about all this
Cost.
The economics don't work unless Starship is flying in quantity and has met or exceeded its cost targets.
Roughly, a single rack plus solar to power it in the $15m+ range just to launch. (This assumes power dissipation is handled via some means that does not require launch to orbit. Also does not include batteries.) Choose your own hardware for the rack, but call it < $5m.
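Sketch of where a "$15m+" figure could come from on Falcon 9. The mass budget below is my assumption, not from this comment:

```python
# Launch cost per kg from published Falcon 9 figures, then apply it
# to an assumed rack + solar mass budget.
FALCON9_PRICE = 67e6     # approx. Falcon 9 list price, USD
FALCON9_LEO_KG = 17_500  # approx. reusable payload to LEO, kg
COST_PER_KG = FALCON9_PRICE / FALCON9_LEO_KG  # ~ $3,800/kg

RACK_KG = 1_500   # assumed: loaded rack plus structure and shielding
SOLAR_KG = 2_500  # assumed: solar array, power electronics, bus

launch_cost = (RACK_KG + SOLAR_KG) * COST_PER_KG
print(f"${launch_cost/1e6:.1f}M per rack")  # → $15.3M per rack
```

Starship's advertised targets would cut the $/kg by an order of magnitude or more, which is exactly why the whole case hinges on it.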
SpaceX earning $15m every time someone launches a $5m rack would be a great business for SpaceX.
Use your own calculator/LLM, but mine is suggesting that the ~$7B Colossus 1 data center in TFA would be around $50B if launched on Falcon 9 (still ignoring cooling and batteries).
(There are obviously a lot of other asterisks. I'm ignoring power storage and heat dissipation. Maintenance probably doesn't matter given 75% of cost is in the launch. Network bandwidth could be a problem considering how DCs are used. Competition - if Company A spends $100B for $25B of actual AI infra, how competitive will they be against Company B who gets $100B for their $100B by spending it in Canada or Mexico, which they can do right now? Etc.)
None of this works without Starship, which has not set a date for its first LEO insertion test yet. Yet the whole point of orbital DCs is nothing on the ground can move fast enough, hence the rush to orbit...which can't really move at all right now.
No, it doesn't make any sense.
In space you get bit flips fairly quickly with very small transistors. You would have to run everything on fairly old hardware, which probably makes the whole thing economically inefficient for serious "computation in space".