There's been a lot of talk about orbital DCs lately, but with these levels of density, orbital CDNs might be a more obvious use case. It would be interesting to see whether Starlink could use drives like this to cache media content and reduce the overall data moving through the constellation. It could even be worth having some satellites in higher orbits (even GEO, if the ground hardware can reach it) dedicated to streaming media content, since you can tolerate higher RTT for content that doesn't need to be real time.

no, absolutely not. orbital datacenters are never going to happen, it doesn't matter whether you try to frame them as compute or storage or whatever else.

the extreme density of these SSDs is actually an anti-feature in the context of spacecraft hardware.

the RAD750 CPU [0] for example uses a 150nm process node. its successor the RAD5500 [1] is down to 45nm. that's an order of magnitude larger than chips currently made for terrestrial uses.

radiation-hardening involves a lot of things, but in general the more tightly packed the transistors are, the more susceptible the chip is to damage. sending these SSDs to space would be an absurd waste of money because of how quickly they would degrade.

and then there's the power consumption & heat dissipation. one of these drives draws 25W [2] and Dell is bragging about cramming 40 of them into one server. that's a full kilowatt of power - essentially a space heater in a 2U form factor.
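The power claim is easy to sanity-check; this is just the arithmetic from the comment above (25W per drive from the Kioxia datasheet, 40 drives per server from Dell's marketing):

```python
# Back-of-envelope check on the drive power claim.
drive_power_w = 25        # per-drive draw quoted from the datasheet
drives_per_server = 40    # drives Dell fits in one 2U server
total_w = drive_power_w * drives_per_server
print(total_w)  # 1000 W of drive power alone, before CPUs, NICs, cooling
```

And that kilowatt has to be radiated away in vacuum, where there's no convective cooling at all.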

0: https://en.wikipedia.org/wiki/RAD750

1: https://en.wikipedia.org/wiki/RAD5500

2: https://americas.kioxia.com/content/dam/kioxia/en-us/busines...

AFAICT[1] the latest generation of SpaceX Starlink satellites use AMD Versal XQR SoCs, which are built on a 7nm process with components like the main processor (dual-core ARM Cortex-A72) and memory (DDR4) clocked in the gigahertz, not megahertz, range.[2] At least some of these SoC models (presumably the lower-clocked ones) are certified for geosynchronous orbits, not just low-earth orbit.

[1] https://www.pcmag.com/news/amd-chips-are-powering-newest-sta...

[2] https://docs.amd.com/r/en-US/ds955-xqr-versal-ai-edge/Genera...

Just to put in my 2 cents as someone tangential to, and sometimes a little too close for comfort to, the field: with FPGAs (aka the half of this SoC doing the real work) there's often a lot of work that goes into radiation tolerance and hardening, which are different things. It's not like you can just put the chip in space and be done. You have to do things like regularly scrubbing the "bitstream" for errors, triple-module-redundancy (aka cutting yourself down to a third of the chip's capacity), and other mitigations.
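The triple-module-redundancy idea is simple enough to sketch: run three copies of the same logic and take a bitwise majority vote, so a single radiation-induced upset in any one copy gets masked. A minimal illustration (not real FPGA tooling, just the voting principle):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    # Bitwise majority: a bit is set in the output iff it is set
    # in at least two of the three replica outputs.
    return (a & b) | (a & c) | (b & c)

# One replica suffers a single-event upset; the vote still recovers the value.
good = 0b1011
flipped = good ^ 0b0100  # bit flip in replica c
assert tmr_vote(good, good, flipped) == good
```

This is why TMR costs you two thirds of the chip: all three replicas compute the same answer in parallel, and the voter itself has to be hardened too.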

What is the designed lifespan of a Starlink satellite?

The RAD750 is like 20 years old and is the absolute king of high reliability in the most extreme radiation environments. LEO is much more forgiving, and there are plenty of examples of commercial gear operating in it. You could definitely put this much storage into LEO along with some EDAC and be fine for a few years.

It's possible to run a modern GPU on a satellite: https://www.starcloud.com/starcloud-1

Some error rate is acceptable for uses which aren't "mission-critical".

Redundancy and error correction should be built into arrays this large anyway. The amount just needs to be increased.
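The simplest version of that idea is RAID-style XOR parity: the array survives the loss of any one member of a stripe, and wider codes (Reed-Solomon, LDPC) extend the same principle to tolerate more failures. A toy sketch:

```python
from functools import reduce

def parity(blocks: list[int]) -> int:
    # XOR of all blocks in the stripe, stored alongside the data.
    return reduce(lambda x, y: x ^ y, blocks)

data = [0x12, 0x34, 0x56]
p = parity(data)
# Block 1 is lost (radiation, wear, whatever) -- rebuild it from the rest:
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

For a space environment you'd crank the code rate way up compared to terrestrial arrays, trading capacity for upset tolerance.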

> order of magnitude

It is much worse than that. Even taking the node names at face value[1], that is just one dimension; there are two (or three[2]) dimensions to consider, so the difference would be more like 100x.

Nehalem (2008) was built on a 45nm node at roughly 3 MTr/mm²; today's 3nm nodes from TSMC (N3E/P/X/C, 2023-24) reach about 220 MTr/mm².

Of course transistor density is just one metric; there are many other improvements to consider over the last two decades.

[1] Processor node names after all haven't been tied to physical scale for 30 years https://www.eejournal.com/article/no-more-nanometers

[2] HBM that modern GPUs use already leverage 3D ICs.
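The scaling argument above is easy to check with the figures quoted: read "45nm → 3nm" naively as a linear feature size, square it for area, and compare against the published density numbers (which come out lower, since node names stopped tracking physical scale long ago):

```python
linear_ratio = 45 / 3            # naive one-dimensional reading of the node names
area_ratio = linear_ratio ** 2   # two dimensions -> 225x in theory
density_ratio = 220 / 3          # ~3 MTr/mm2 (Nehalem) vs ~220 MTr/mm2 (TSMC N3)
print(area_ratio, round(density_ratio))  # 225.0 vs ~73
```

So the real density gap is roughly 70-80x, somewhere between "an order of magnitude" and the naive 225x the node names would suggest.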

In the limit, packing transistors tighter should mean more radiation resistance, not less, because you can shield them with a smaller mass of water or lead or whatever.

It would be funny if lead-acid batteries end up being space-grade, should the lead be able to serve as a radiation shield.

IIRC, metals aren’t great shields because of secondary particles emitted after the first one hits the shield.

[flagged]

i can write extremely confident things in all lowercase and include citations too. [1]

doesn't mean i'm correct. [2]

It certainly looks more correct than a response like this; normally you'd expect a counterpoint instead of whatever it is you are doing.

On Graham's hierarchy of disagreement, your comment is 3 tiers lower than the parent comment.

For the sake of the generations that come after us, we really should not dump valuable material into space. I somehow doubt the electronics in space would be recovered and recycled properly.

Nothing is recycled properly. Recycling was a story told to ease consumers' minds so they keep on consuming. The stuff you throw away ends up in a landfill, in the sea, or on a ship to someplace else where it gets burned and then buried. Sending it to space makes absolutely no difference.

Most steel and aluminum, and nearly all copper, is recycled.

How do you know?

It would be relatively obvious from the difference between production and mining output.

Or you could use fibre, which has the advantage of not needing >1kW of concentrated microwaves to get ~2 gigabits of throughput.

Or, even better, not yeeting it into an environment where it's cooked and cooled every 90 minutes.

Or, even better, where it's not absolutely pelted by cosmic rays hard enough to obliterate a good GB of data a day.

Or space data centre.

If I correctly understand what you're suggesting, then that could save on uplink bandwidth. Sending one copy into space, and then sending it back down over and over again sounds nice.

But does it solve a problem that we actually have? Is uplink bandwidth a pressing limitation?

/me turning in my sleep muttering https://en.wikipedia.org/wiki/Teledesic