Taking it a step further, compression is not a function of distance either; it's a function of how parallel the rays are. You could also get compression up close by capturing the light field with some sort of spatially distributed camera (a pushbroom camera?).
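To put a number on that, here's a quick sketch (toy distances, arbitrary units, nothing measured): the apparent size ratio of two equal-sized objects, one at distance D and one at D + gap, is D / (D + gap), which approaches 1 as the rays become parallel:

```python
# "Compression" as a function of ray parallelism: the farther the viewpoint,
# the more parallel the rays, and the closer the near/far size ratio gets to 1.
gap = 10.0
for D in (5.0, 50.0, 5000.0):
    print(D, D / (D + gap))
# 5.0    -> 0.333  strong perspective: the near object looks 3x bigger
# 50.0   -> 0.833
# 5000.0 -> 0.998  nearly equal sizes: classic telephoto "compression"
```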

But if you had an algorithm that changed the directions of the rays, wouldn't the resulting image implicitly correspond to a different camera position (closer or farther away)? Unless you apply some kind of psychedelic deformation.

Anyway, I'd say you're technically correct, but you might miss some angles and end up with holes in the resulting images. Then again, with Gaussian splats and AI we could reconstruct those holes easily now.

In practice you might struggle to do it well, but in principle it could be a gigantic image sensor with no lens, just a collimator on each pixel. You can angle the collimators to collect the rays that would otherwise end up at the far-away camera.
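As a rough sketch of the geometry (everything here is hypothetical for illustration: a flat sensor in the z=0 plane, the emulated camera on the axis at distance D), each collimator is just tilted to point from its pixel toward the virtual camera, and the tilts go to zero, i.e. parallel, as D grows:

```python
import numpy as np

def collimator_tilt(x_pixel, D):
    """Tilt angle (radians, off the sensor normal) for a pixel at position
    x_pixel when emulating a pinhole camera at C = (0, 0, D). The collimator
    accepts only the ray that would have continued on to that distant camera.
    As D -> inf, every tilt goes to 0: all collimators become parallel."""
    return np.arctan2(x_pixel, D)

xs = np.array([-1000.0, 0.0, 1000.0])   # pixel positions on the sensor, say in mm
for D in (2_000.0, 1e9):                # a nearby vs. an effectively infinite camera
    print(D, np.degrees(collimator_tilt(xs, D)))
# D = 2000:  tilts of about -26.6, 0, +26.6 degrees (strong fan-out)
# D = 1e9:   tilts of ~0.00006 degrees, i.e. essentially parallel rays
```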

Also, satellites photographing the Earth do it by moving the camera, and they can produce compression effects beyond what you'd get from their distance alone.

For satellites, are you talking about photographing the same patch of the Earth's surface from different angles as the satellite orbits?

No. I mean pointing the camera in a fixed direction as the satellite orbits, so that it scans a strip along the Earth's surface. That makes the rays parallel across the field of view (in the direction of motion), so it looks as if the camera were infinitely far away.
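Here's a minimal toy model of that, assuming a nadir-pointing camera at altitude H that keeps a single pixel column per track position (the names and numbers are mine, not any real satellite):

```python
import numpy as np

H = 500.0   # camera altitude (arbitrary units)
f = 1.0     # focal length (arbitrary units)

def pushbroom_project(points):
    """Project 3D points (x_along, y_cross, z_height) into pushbroom image coords.

    Along-track: a point is imaged when the camera passes directly over it,
    so u = x regardless of height -- an orthographic (parallel-ray) projection.
    Cross-track: ordinary perspective within the single scanned column.
    """
    x, y, z = points.T
    u = x                      # no height parallax along track: rays are parallel
    v = f * y / (H - z)        # perspective across track
    return np.stack([u, v], axis=1)

# Two points at the same ground position but different heights:
pts = np.array([[100.0, 20.0, 0.0],
                [100.0, 20.0, 50.0]])
print(pushbroom_project(pts))
# The along-track coordinate u is 100.0 for both -- no parallax in the flight
# direction, as if the camera were infinitely far away. A single perspective
# shot from altitude H would separate them.
```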