> The compression effect a telephoto has can be used even more dramatically to tie together different planes in a scene

This is somehow a common misconception among non-engineers. I read and believed it when I was 14 years old; at some point I tested it in Photoshop by overlaying pictures taken at different zoom factors and found that telephotos DO NOT compress scenes.

It's the fact that you are far away from the subject that compresses distances.

Once you have decided on the constraint of using a telephoto (to compress distances), you then move away from the scene (as the article said) so that it fits the zoom factor. The relative distances are what create the compression, not the glass inside the lens. You could also take a wide picture and make a digital crop.
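
A toy pinhole-camera sketch (heights and distances are made up) makes the point: the relative sizes of two planes in the frame depend only on their distances from the camera, and focal length just scales everything uniformly, exactly like a crop.

```python
# Toy pinhole model: apparent size of an object of height H at
# distance Z is f * H / Z. Heights and distances are hypothetical.

def apparent_size(f, height, distance):
    return f * height / distance

for f in (35, 150):  # focal lengths in mm
    person = apparent_size(f, 1.8, 3.0)       # person 3 m away
    building = apparent_size(f, 10.0, 100.0)  # building 100 m away
    print(f"{f} mm: person/building ratio = {person / building:.2f}")
# Both focal lengths give ratio 6.00: the longer lens magnifies
# everything uniformly, exactly like cropping the wider shot.
```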

I interpreted "compression effect" to mean exactly what you're saying it does. I'm not familiar with what you're saying it does not do. I have never heard of that misconception.

Parent directly quoted the relevant text:

> effect a telephoto has

They're saying that it is not a property of the lens, but rather of the perspective of the scene viewed from a distance. You'd get the same effect using any focal length lens, taking the shot from the same location, and cropping appropriately.

Perhaps in contrast to depth of field which is a property of the lens.

https://www.tamron.com/global/consumer/sp/impression/detail/...

I'm looking at the portraits of the woman on the beach and I'm not understanding how to get from one to the other with cropping. What am I missing?

The photographer had to move quite a bit further back to get the subject to be the same size in the frame at 150mm as the subject was at 35mm.

They could have used the 35mm lens at the same distance as the 150mm lens and simply cropped and the perspective compression would be the same (it'd just be a lower resolution image).
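
As a sketch with made-up numbers: under the pinhole model, subject size on the sensor is f * H / Z, so matching the 35mm framing with a 150mm lens means multiplying the shooting distance by 150/35.

```python
# Pinhole model: size on sensor = f * H / Z. To keep a (hypothetical)
# 1.7 m tall subject framed identically, distance must scale with f.

f_wide, f_tele = 35.0, 150.0   # focal lengths in mm
z_wide = 2.5                   # m, distance used with the 35 mm lens

size_on_sensor = f_wide * 1.7 / z_wide   # framing to match
z_tele = f_tele * 1.7 / size_on_sensor   # = z_wide * f_tele / f_wide
print(f"Shoot from {z_tele:.2f} m with the 150 mm lens")  # 10.71 m
```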

You have to take several steps back, then take the picture and crop.

Of course, on many cameras you then would get a smudgy or pixelated mess.

Not only is a digital crop an absolute loss in resolution, but different focal lengths also produce different image quality, from the different perceived thickness of the zone of focus at an equivalent stop and the strength of the blurring outside it.

Edit0: Obviously you also see the thing you're trying to get a picture of better

Of course! This can be explained if we have more time. But the basic explanation should never lead us to believe compression is a property of the lens

I think you're misunderstanding, and have come up with a strawman here.

What you're describing as correct is what people understand. Of course it's the fact that you're far away. I think it goes without saying that you can't use a telephoto lens inside of a room or something.

And yes, of course you could take a wide picture and make a crop. But the resolution would be terrible. The whole point of a telephoto lens is to take that tiny crop of your environment at full resolution.

I'm sorry you learned it wrong at age 14 and maybe wherever you got it from really did explain it badly. But it's standard for professionals to talk about the effect of a long lens in this way, that the camera will be further away.

Of course. I wish it were better explained. No, not everyone interprets the words to get the correct view. The words used are awful, and some photographers are responsible for this.

I don't think it's a strawman. I've definitely seen a lot of people think that the perspective compression is a result of lens choice rather than camera position.

Taking it a step further, compression is not a function of distance either. It's how parallel the rays are. You could also get compression up close by capturing the light field with some sort of spatially distributed camera (pushbroom camera?).

But if you had an algorithm that changed the directions of rays, the resulting image would implicitly have a different camera position (closer or further away), no? Unless you do some kind of psychedelic deformation.

Anyway, I'd say you're technically correct, but you might miss some angles and have some holes in the resulting images. Though now, with Gaussian splats and AI, we could reconstruct the holes easily.

In practice, you might struggle to do it well, but in principle, it could be a gigantic image sensor with no lens but a collimator on each pixel. You can angle the collimators to collect rays that would otherwise end up at the far-away camera.

Also, satellites photographing the Earth do it by moving the camera, and they can produce compression effects beyond what you'd get just because of their distance.

For satellites, are you talking about photographing the same patch of Earth's surface from different angles as the satellite orbits?

No. I mean pointing the camera in a fixed direction as the satellite orbits, so it scans a strip along the surface of the Earth. This makes the rays parallel across the field of view (in the movement direction), so it looks like the camera is infinitely far away.
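
A toy comparison (made-up numbers, arbitrary units) of a fixed pinhole camera versus a pushbroom scanner that records each point as it passes directly underneath: in the along-track direction the pushbroom image is orthographic, so depth drops out entirely.

```python
# Fixed pinhole camera vs. pushbroom scanner moving along x.
# The pushbroom records each point from directly overhead, so its
# along-track coordinate is independent of depth -- as if the
# camera were infinitely far away.

points = [(10.0, 2.0), (10.0, 100.0)]  # (along-track x, depth z)

f = 50.0
pinhole = [f * x / z for x, z in points]  # position depends on depth
pushbroom = [x for x, z in points]        # depth drops out entirely
print(pinhole)    # [250.0, 5.0]  -- perspective: points diverge
print(pushbroom)  # [10.0, 10.0] -- orthographic: maximal "compression"
```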

It took me a while to really believe that in a perfect, "spherical cow" kind of way, zoom = crop. Which is how digital zoom works.

Of course analog zoom > crop, but only because reality < theory.

You can easily correct lens distortion with Adobe Camera Raw, then what’s left (the difference) is almost just chromatic aberration and resolution

A similar misconception is that bigger sensors give shallower DOF than smaller sensors, which is also an effect of your position relative to the subject rather than the sensor itself.
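
A rough sketch using the standard thin-lens approximation (typical but made-up numbers, and assuming the circle of confusion scales with sensor size) shows the distance term doing the work: with the same 50mm f/2.8 lens and the same framing, the APS-C shooter stands about 1.6x further back, and DOF grows with the square of that distance.

```python
# Thin-lens approximation for Z >> f: total DOF ~ 2*N*c*Z^2 / f^2
# (N = f-number, c = circle of confusion, Z = subject distance).

def dof_m(f_mm, n, z_m, coc_mm):
    z = z_m * 1000.0  # work in millimetres
    return 2 * n * coc_mm * z * z / (f_mm * f_mm) / 1000.0  # metres

# Same 50 mm f/2.8 lens, same framing of the subject:
full_frame = dof_m(50, 2.8, 2.0, 0.030)    # shoots from 2 m
aps_c = dof_m(50, 2.8, 3.2, 0.030 / 1.6)   # 1.6x further back, smaller c
print(f"{full_frame:.2f} m vs {aps_c:.2f} m")  # 0.27 m vs 0.43 m
```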

Interesting!