I like how details fade around the edges -- though for maximum accuracy, there should only be a tiny area of high detail in the center, with most of the visual field indistinct (as well as a total blind spot to one side). The brain just fills in remembered details of whatever you're not looking at directly, the same way you tune out the sight of your own nose. Gaze tracking combined with foveated rendering is a neat way of exploiting this quirk to speed up graphics processing:

https://en.wikipedia.org/wiki/Foveated_rendering
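The core idea is simple: spend full rendering effort only near the tracked gaze point and progressively less toward the periphery. A minimal sketch of that falloff in Python (the radii and the 1/8-resolution floor are made-up illustrative numbers, not values from any real renderer):

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, full_res_radius=60, falloff=300):
    """Return the fraction of full shading resolution to spend on a pixel.

    Pixels within `full_res_radius` of the gaze point get full detail;
    beyond that, detail falls off linearly toward a coarse floor,
    mimicking the eye's sharp fovea and indistinct periphery.
    All distances are in screen pixels and purely illustrative.
    """
    dist = math.hypot(px - gaze_x, py - gaze_y)
    if dist <= full_res_radius:
        return 1.0
    # Linear falloff down to a 1/8-resolution floor in the far periphery.
    t = min((dist - full_res_radius) / falloff, 1.0)
    return max(1.0 - t, 0.125)
```

A real implementation would map this rate onto hardware variable-rate-shading tiles rather than per-pixel math, but the eccentricity-based falloff is the same trick.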

And the eye's periphery, while it isn't sharp, is highly sensitive to movement. Which is "obvious" if you ponder the question of where dangerous things tend to appear first. Thus things dangling from the rear-view mirror in a car are a bad thing: they demand (subconscious) attention.

The cone cells in the eye's center are color sensitive but need a lot of light, while the rod cells at the edges are highly sensitive to motion and work even in low light. That might be one of the reasons why flicker is strenuous for the eyes. A funny side effect is that looking at stars in the night sky works better when you look slightly beside a star; I guess that's because the light-sensitive rod cells then take over.

> Thus things dangling from the rear-view mirror in a car are a bad thing: they demand (subconscious) attention.

And open offices with the associated foot traffic. Constant distraction, quite apart from the noise factor.

I would argue that the viewer's eye already provides this effect. Whichever part of the image you focus on is sharp; the rest is indistinct. The result is that we are drawn into the scene more fully: we see as if our eye were allowed to roam around the scene as his was, rather than being confined to the much more limited perspective of a fixed gaze.