Can this be applied to camera shake/motion blur? At slow shutter speeds, the slight shake of the camera produces this type of blur. It's usually mitigated with IBIS (in-body image stabilization), which physically moves the sensor.
The ability to reverse is very dependent on the transformation being well known; in this case it is deterministic and known with certainty. Any algorithm to reverse motion blur will depend on the camera's translation and rotation in physical space, and the best the algorithm can do is limited by the uncertainty in estimating those values.
If you apply a synthetic motion blur, like Photoshop's or After Effects', then that could probably be reversed fairly well.
> and the best the algorithm could do will be limited by the uncertainty in estimating those values
That's relatively easy if you assume simple translation and rotation (simple camera movement), as opposed to a squiggle movement or something (e.g. from vibration or being knocked). You can simply detect how much sharper each candidate makes the image and home in on the right values.
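That search can be sketched in a few lines: hypothesize a parametric motion kernel (length and angle for a simple linear shake), deconvolve with each candidate, and keep whichever scores sharpest under something like variance-of-Laplacian. A minimal numpy sketch, not a robust implementation; all names here are illustrative, and real sharpness metrics have to be more careful, since a wrong kernel can produce high-frequency ringing that fools a naive score:

```python
import numpy as np

def variance_of_laplacian(img):
    # Crude sharpness score: blur flattens the Laplacian response.
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def motion_kernel(length, angle, size=15):
    # Line kernel modelling simple linear camera movement.
    k = np.zeros((size, size))
    c = size // 2
    for t in np.linspace(-length / 2, length / 2, 64):
        k[int(round(c + t * np.sin(angle))),
          int(round(c + t * np.cos(angle)))] = 1.0
    return k / k.sum()

def wiener_deconvolve(blurred, kernel, snr=100.0):
    # Regularized inverse filter; snr trades ringing against sharpness.
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + 1.0 / snr)))

def estimate_motion(blurred, lengths, angles):
    # Pick whichever candidate yields the sharpest deconvolution.
    return max(((l, a) for l in lengths for a in angles),
               key=lambda p: variance_of_laplacian(
                   wiener_deconvolve(blurred, motion_kernel(*p))))
```

A coarse grid like this would normally be followed by a finer local search around the best candidate.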
I recall a paper from many years ago (early 2010s) describing methods to estimate the camera motion and remove motion blur from blurry image contents only. I think they used a quality metric on the resulting “unblurred” image as a loss function for learning the effective motion estimate. This was before deep learning took off; certainly today’s image models could do much better at assessing the quality of the unblurred image than a hand-crafted metric.
Probably not the exact paper you have in mind, but... https://jspan.github.io/projects/text-deblurring/index.html
Record gyro motion at time of shutter?
The missing piece of the puzzle is how to determine the blur kernel from the blurry image itself. There's a whole body of literature on this, called blind deblurring.
For instance: https://deepinv.github.io/deepinv/auto_examples/blind-invers...
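For a sense of what the non-blind half of the problem looks like once a kernel estimate is in hand: blind methods typically alternate between re-estimating the kernel and running a deconvolution step like Richardson–Lucy. This is a generic textbook sketch in numpy with circular boundary handling, not the algorithm from the linked example:

```python
import numpy as np

def richardson_lucy(blurred, kernel, iters=30):
    # Non-blind Richardson-Lucy deconvolution (circular boundaries via FFT).
    # Blind deblurring wraps a step like this in an outer loop that
    # also updates the kernel estimate.
    K = np.fft.fft2(kernel, s=blurred.shape)
    est = np.full_like(blurred, blurred.mean())   # flat initial guess
    for _ in range(iters):
        conv = np.real(np.fft.ifft2(np.fft.fft2(est) * K))
        ratio = blurred / np.maximum(conv, 1e-12)
        # Multiplicative update; the conj(K) term is correlation with
        # the kernel (the adjoint of the blur operator).
        est = np.maximum(
            est * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(K))), 0.0)
    return est
```

The multiplicative update keeps the estimate non-negative, which is part of why RL behaves better than a naive inverse filter on photographic data.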
Absolutely, Photoshop has it:
https://helpx.adobe.com/photoshop/using/reduce-camera-shake-...
Or... from the note at the top, had it? Very strange, features are almost never removed. I really wonder what the architectural reason was here.
Just guessing, patent troll.
Oof, I hope not. I wonder if the architecture for GPU filters migrated, and this feature didn't get enough usage to warrant being rewritten from scratch?
I believe Microsoft of all people solved this a while ago by using the gyroscope in a phone to produce a de-blur kernel that cleaned up the image.
It's somewhere here: https://www.microsoft.com/en-us/research/product/computation...
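The gyro approach is appealing because it sidesteps blind estimation entirely: integrate angular velocity over the exposure, map small rotations to pixel displacements via the focal length, and splat the resulting path into a kernel. A rough numpy sketch; the function name and parameters are hypothetical, and the small-angle mapping and equal-weight splatting are simplifying assumptions, not Microsoft's actual pipeline:

```python
import numpy as np

def kernel_from_gyro(omega, dt, focal_px, size=31):
    # omega: (N, 2) angular velocities (pitch, yaw) in rad/s, sampled
    # during the exposure; dt: sample spacing in seconds;
    # focal_px: focal length expressed in pixels.
    angles = np.cumsum(omega, axis=0) * dt    # integrated rotation (rad)
    # Small-angle approximation: a rotation of theta shifts the image
    # by roughly focal_px * theta pixels.
    path = focal_px * angles
    path -= path.mean(axis=0)                 # centre the kernel
    k = np.zeros((size, size))
    c = size // 2
    for dy, dx in path:
        y, x = int(round(c + dy)), int(round(c + dx))
        if 0 <= y < size and 0 <= x < size:
            k[y, x] += 1.0                    # equal exposure per sample
    return k / k.sum()
```

The resulting kernel can then be fed to any non-blind deconvolution method, since the motion is measured rather than inferred from the image.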
I wonder if the "night mode" on newer phone cameras is doing something similar. Take a long exposure, use the IMU to produce a kernel that tidies up the image post facto. The night mode on my S24 actually produces some fuzzy, noisy artifacts that aren't terribly different from the artifacts in the OP's deblurs.