This is incredible and wonderful news, huge congratulations! As someone who works at the intersection of design and engineering, the detail about delivering "starless versions" so the credit typography doesn't compete with the bright stars is exactly the kind of invisible technical problem-solving I love reading about on here.

On a personal note, I find it very refreshing to hear that a major studio opted for real captured photography. Love that they specifically wanted the authenticity of real narrowband data; that speaks to the production team's vision. Enjoy the premiere night, feel incredibly proud. I was already planning on watching the movie this weekend (it releases here on the 26th) and now I'm doubly excited because of this neat little tidbit.

I'm pretty sure this "Dad did something crazy" moment is going to be a core memory for your kids. Congrats!

I'm curious how the starless versions are created. From the steps at the end, I couldn't see a 'this is how stars are removed' step. Maybe it's part of stacking (but most stars would remain present?) or the calibration process treating stars as noise?

Traditionally (pre-AI) you would use another image of the same part of the sky and negate the items that you want to remove from the image.

As an example, terrestrial telescope mirrors get dusty. You're not going to break down the scope just to clean off the dust, as that is a multi-day operation in most cases. So instead you take "flats": frames of an evenly illuminated white background, which show the dust in its full, dusty glory. When you take your actual images, you calibrate out the flat (conventionally by dividing by the normalized flat frame), which cancels any artifacts the dust introduces. You can use a similar subtraction approach to remove brighter stars from an image that would otherwise saturate the CCD and wash out the background. Turns out it doesn't work for planes. Ask me how I know!

  > Traditionally (pre-AI) you would use another image of the same part of the sky and negate the items that you want to remove from the image.

I'm not an astrophotographer, so I'm curious why that method would work for stars. Aren't the stars fixed in relation to each other between images? I could see how the technique would work with planets, maybe, but not stars.

Why does the technique not work with aircraft? Because they generally fly on fixed routes?

Earth moves - that's how you get the next shot without repositioning the telescope.

This time-lapse probably better visualizes it: https://www.youtube.com/watch?v=wFpeM3fxJoQ

As the Earth rotates over the course of the night, the background stars and nebulae move as a single unit, no?

Maybe for some nearby stars parallax might work to remove them over the course of half a year. But there's no way the Earth's rotation during a single night could move background stars out of a nebula.

Sure, but the nebulae also move along with the stars. The question is how one can subtract the stars without also subtracting the nebulae. (I'm assuming different filters and/or a database of known star positions.)

The ESA catalog is not precise enough to remove a star from an image of the structure of a nebula, never mind Hipparcos. Filters while photographing, and image processing in post, are the way to go.

Don't forget that not only does the star need to be removed, but also its diffraction spikes. Those are artifacts of the optics themselves (diffraction from the secondary-mirror support vanes, for instance), not mapped by any star catalog ))

Makes sense, thanks!

>> I'm curious how the starless versions are created.

It's done using dedicated astrophotography software (StarXTerminator). Example: https://astrobackyard.com/starnet-astrophotography/

So these are more artistic photo works than real science photos...

Rod Prazeres, the astrophotographer, gave an interview where he talks about the process: https://www.astronomy.com/observing/the-astrophotography-of-...

So this part of the blog post is essentially false: "no generative AI of any kind"

I have yet to see a precise technical definition of what "generative AI" means, but StarXTerminator uses a neural network that generates new data to fill in the gaps where non-stellar objects are obscured by stars. And it advertises itself as "AI powered".

I don't consider photos I take on iPhone to be "AI generated" or even "AI augmented" even though iPhone uses neural networks and "AI" to do basic stuff like low light photography, blurring backgrounds, etc.

I agree that I wouldn't call these photos "AI generated", because the majority of what you're seeing is real.

But that's very different to saying that no generative AI was used at all in their production. "AI augmented" sounds pretty accurate to me.

Likewise, if someone posted a photo taken with their iPhone where they had used the built-in AI features to (for instance) remove people or objects, and then they claimed that no AI was involved, I would consider that misleading, even if the photo accurately depicts a real scene in other respects.

As a photographer and machine learning guy, I would call a lot of modern phone photos AI augmented. AI to stack photos or figure out what counts as the background is a little bit of a gray area, but an img-to-img CNN is about as close as you can get to full AI generation without a full GAN or diffusion model.

So funny people are downvoting you...

https://astrobackyard.com/starnet-astrophotography/

“StarNet is a neural network that can remove stars from images in one simple step leaving only the background. More technically, it is a convolutional residual net with encoder-decoder architecture and with L1, Adversarial and Perceptual losses.”

  > So these are more artistic photo works than real science photos...

I disagree. If there are many flies around a statue, and I photograph the statue but remove the flies in the photo (via AI or any other technique), then I'm still producing an image of something that exists in the world, exactly as it appears in the world.

I agree that the claim "no generative AI used" is technically incorrect, but I do feel that the image does not contain any AI-hallucinated content and therefore is an accurate representation of reality. These structures appear in the image exactly as they exist in nature.

AI-related definitions aside, if it's a strictly subtractive/destructive tool that only removes light, it's hard to characterise as "generative" and arguably not much different to filtering frequencies!

It's not just "removing light", because if you removed all the light from stars, you would be left with black spots instead of white spots. The stars are bright enough to completely saturate a region of the image sensor. So there was actually no data recorded about what was in that particular part of the nebula or whatever.

The "generative" part is that the algorithm is filling in a plausible guess as to what would have been observed if there was no star "in the way".
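A toy numpy illustration of that point, using a made-up 1-D "sensor" and made-up numbers: once the ADC clips, the nebula values underneath the star are simply gone, so any star removal has to invent them, for example by interpolating from unsaturated neighbours.

```python
import numpy as np

# Toy 1-D "sensor" reading a faint nebula gradient with a bright
# star on top; the ADC clips at full well (here 255).
x = np.arange(20, dtype=float)
nebula = 10 + 2 * x                        # slowly varying real signal
star = np.where(abs(x - 10) < 3, 1e4, 0)   # star swamps pixels 8..12
raw = np.clip(nebula + star, 0, 255)

saturated = raw >= 255
print(raw[saturated])  # all 255: the nebula values underneath are lost

# Any "star removal" must invent these pixels, e.g. by interpolating
# from the unsaturated neighbours on either side.
filled = raw.copy()
filled[saturated] = np.interp(x[saturated], x[~saturated], raw[~saturated])
```

In this linear toy case the interpolation happens to land exactly on the true nebula values; on real structured data it would only be a plausible guess, which is the whole debate here.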

[deleted]

I feel like the stars are probably pretty easy to mask out since they’re very bright relative to the rest of the image. Once you have the mask, each one is small enough that you could probably fill it with the values from adjacent pixels. Kinda like sensor mapping to hide dead pixels. That’s just a guess though, I’m sure there’s more to it than that.
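That mask-and-fill guess can be sketched with plain numpy on synthetic data (real tools are far more sophisticated, and the threshold and window size here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Smooth synthetic "nebula" plus a few bright single-pixel "stars".
yy, xx = np.mgrid[0:40, 0:40]
img = 100 + 30 * np.exp(-((xx - 20) ** 2 + (yy - 20) ** 2) / 200.0)
img += rng.normal(0, 2, img.shape)
for sy, sx in [(10, 8), (25, 30), (30, 12)]:
    img[sy, sx] = 5000.0                 # stars saturate single pixels

# 1. Mask: the stars are far brighter than anything else.
mask = img > 500

# 2. Fill each masked pixel with the median of its unmasked
#    neighbours, similar to how cameras map out dead pixels.
filled = img.copy()
for y, x in zip(*np.nonzero(mask)):
    win = img[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3]
    wmask = mask[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3]
    filled[y, x] = np.median(win[~wmask])

print(filled.max() < 500)  # True: no star pixels remain
```

This only works because the toy stars are single pixels; real stars are bloated by seeing and diffraction into blobs many pixels wide, which is where the neural-net tools earn their keep.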

Bright stars are so bright they literally mask areas of the sky. You'll probably need deconvolution algorithms (CLEAN being the standard some time ago; I don't know whether some AI/deep-inverse approach works better nowadays...) to remove them.
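For the curious, here is a toy 1-D sketch of the CLEAN idea, not the real interferometric algorithm: iteratively find the brightest residual pixel, subtract a scaled copy of the known point-spread function centred there, and accumulate those components into a model. The PSF, source amplitudes, and loop gain below are all made up for illustration.

```python
import numpy as np

# The observed "dirty" signal is point sources convolved with a
# known PSF; CLEAN repeatedly subtracts scaled, shifted PSF copies.
psf = np.array([0.1, 0.5, 1.0, 0.5, 0.1])     # known instrument response
true_sky = np.zeros(30)
true_sky[[8, 20]] = [10.0, 6.0]               # two point sources
dirty = np.convolve(true_sky, psf, mode="same")

residual = dirty.copy()
model = np.zeros_like(dirty)
gain = 0.2                                    # loop gain < 1 for stability
for _ in range(200):
    peak = int(np.argmax(residual))
    amp = gain * residual[peak]
    if amp < 1e-3:
        break                                 # residual is effectively clean
    model[peak] += amp
    # Subtract the PSF centred on the peak from the residual.
    for i, p in enumerate(psf):
        j = peak + i - len(psf) // 2
        if 0 <= j < len(residual):
            residual[j] -= amp * p

print(np.round(model[[8, 20]], 2))  # close to the true amplitudes 10 and 6
```

With well-separated sources like these the loop converges on the true amplitudes; overlapping sources and noisy data are what make the real problem hard.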

There are several “AI” deconvolution tools to remove stars which work exceptionally well: two of the most popular ones being StarNet and RC-Astro’s StarXTerminator. I’m willing to bet that the author used the latter for star removal as it’s become something of a standard in the astrophotography world.