There are two very distinct kinds of AI workloads that go into data centres:

    1. Inference
    2. Training

Inference just might be doable in space because it is "embarrassingly parallel" and can be deployed as a swarm of thousands of satellites, each carrying the equivalent of a single compute node with 8x GPUs. The inputs and outputs are just text, which is low bandwidth. The model parameters only need to be uploaded a few times a year, if that. Not much storage is required: just a bit of flash for the model, caching, logging, and the like. Such a spacecraft is very similar to a Starlink satellite, just with bigger solar panels and some additional radiative cooling. Realistically, it would use inference-optimised chips rather than power-hungry general-purpose NVIDIA GPUs, LPDDR5 instead of HBM, and so on.
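To see how low-bandwidth text really is, here is a rough sketch. Every number in it (concurrent sessions, generation rate, model size, upload cadence) is an illustrative assumption, not a measured figure:

```python
# Back-of-envelope: text I/O bandwidth for one inference satellite.
# All inputs below are illustrative assumptions, not measured values.

CONCURRENT_STREAMS = 10_000   # assumed simultaneous chat sessions per satellite
TOKENS_PER_SECOND = 50        # assumed generation rate per stream
BYTES_PER_TOKEN = 4           # rough average for UTF-8 text

downlink_bytes_per_s = CONCURRENT_STREAMS * TOKENS_PER_SECOND * BYTES_PER_TOKEN
downlink_mbit_per_s = downlink_bytes_per_s * 8 / 1e6

MODEL_SIZE_GB = 200           # assumed weights of a large model at 8-bit precision
UPLOADS_PER_YEAR = 4          # "a few times a year"
upload_avg_bit_per_s = MODEL_SIZE_GB * 1e9 * 8 * UPLOADS_PER_YEAR / (365 * 86400)

print(f"token traffic:          ~{downlink_mbit_per_s:.0f} Mbit/s")
print(f"weight upload, averaged: ~{upload_avg_bit_per_s / 1e3:.0f} kbit/s")
```

Even with generous assumptions, the token traffic lands in the tens of megabits per second, and the amortised weight uploads are a rounding error next to what a Starlink-class radio link already carries.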

Training is a whole other ballgame. It is parallelisable, sure, but only through heroic efforts involving fantastically expensive network switches with petabits of aggregate bandwidth. It also needs more general-purpose GPUs, access to petabytes of data, and so on. The name of the game is to bring a hundred thousand or more GPUs into close proximity and connect them at a terabit or more per GPU so they can exchange data. This cannot be put into orbit with any near-future technology: it would be a giant satellite with square kilometres of solar and cooling panels. It would sooner or later get hit by space debris, not to mention the hazard it would pose to other satellites.
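The "square kilometres" claim can be checked with textbook physics. This sketch assumes a ~1 GW electrical load, high-end cell efficiency, and a two-sided radiator near room temperature; only the solar constant and the Stefan-Boltzmann law are hard numbers:

```python
# Back-of-envelope: panel area for a ~1 GW training cluster in orbit.
# Efficiency, emissivity, and radiator temperature are assumed values.

POWER_W = 1e9              # ~1 GW electrical load
SOLAR_FLUX = 1361.0        # W/m^2, solar constant near Earth
CELL_EFFICIENCY = 0.30     # assumed high-end multi-junction cells

solar_area_km2 = POWER_W / (SOLAR_FLUX * CELL_EFFICIENCY) / 1e6

SIGMA = 5.670e-8           # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.90          # assumed radiator coating
RADIATOR_TEMP_K = 300.0    # assumed radiator temperature

# Nearly all of the 1 GW comes back out as heat; radiate from both sides:
radiator_area_km2 = POWER_W / (2 * EMISSIVITY * SIGMA * RADIATOR_TEMP_K**4) / 1e6

print(f"solar array: ~{solar_area_km2:.1f} km^2")
print(f"radiators:   ~{radiator_area_km2:.1f} km^2")
```

Both areas come out at a couple of square kilometres each, which is why a single monolithic training satellite is such an enormous debris target.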

The problem with putting inference-only capacity into space is that training still needs to go somewhere, and current AI data centres pull double duty: they are usable for both training and inference, or any mix of the two. The greatest challenge is that training a bleeding-edge model needs the biggest possible cluster (approaching a million GPUs!) in one place, and few places in the world can provide the ~gigawatt of power to light up something that big. Unlike inference, training workloads can't be spread out.

Space solves the "wrong" problem! We can distribute inference across thousands of data centre locations here on Earth, each needing just hundreds of kilowatts. That's no problem.
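The arithmetic behind that claim is simple. Node power, nodes per site, and site count below are all hypothetical round numbers chosen for illustration:

```python
# Back-of-envelope: spreading inference across many small terrestrial sites.
# All figures are illustrative assumptions.

NODE_POWER_KW = 10     # assumed draw of one 8-GPU inference node
NODES_PER_SITE = 30    # a small edge deployment
SITES = 3_000          # thousands of locations

site_power_kw = NODE_POWER_KW * NODES_PER_SITE    # hundreds of kW per site
fleet_power_mw = site_power_kw * SITES / 1_000    # total across the fleet

print(f"per site: {site_power_kw} kW")
print(f"fleet:    {fleet_power_mw:.0f} MW in aggregate, with no single hookup above {site_power_kw} kW")
```

The fleet adds up to near-gigawatt scale in aggregate, yet no individual grid connection exceeds what an ordinary commercial feeder can supply, which is exactly the load profile training clusters cannot have.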

It's the giaaaant clusters everyone is trying to build that are the problem.