They're a trivial inference problem, not a trivial problem to solve as such.

As in, if I need to infer the radius of a circle from N points sampled from that circle... yes, I'm sure there's a textbook of algorithms etc. with a lot of work spent on them.

But in the sense of statistical inference, you're only learning a property of a distribution given samples from that very distribution... there isn't any inferential gap. As N -> inf, you recover the entire circle itself.
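To make that concrete, here's a minimal sketch of the "trivial inference" case. The estimator (mean distance of the points from their centroid) and the sampling setup are my own illustrative choices, not something from a specific textbook; the point is just that the estimate converges as N grows:

```python
import math
import random

def estimate_radius(points):
    """Estimate a circle's radius as the mean distance of the sampled
    points from their centroid. For points drawn uniformly from a
    circle, this converges to the true radius as N -> inf."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return sum(math.hypot(x - cx, y - cy) for x, y in points) / n

# Sample N points uniformly from a circle of radius 2 centred at (1, -3).
random.seed(0)
N = 100_000
pts = [(1 + 2 * math.cos(t), -3 + 2 * math.sin(t))
       for t in (random.uniform(0, 2 * math.pi) for _ in range(N))]

print(estimate_radius(pts))  # approaches 2 as N grows
```

No bridging theory needed: the samples come from the very object whose property you want.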

Compare with, say, learning the 3d structure of an object from 2d photographs. At every rotation of that object, you have a new pixel distribution. So in pixel-space a 3d object is an infinite family of distributions, and the inference goal in pixel-space is to choose between sets of these infinities.

That's actually impossible without bridging information (i.e., some theory). And in practice, it isn't solved in pixel space... you suppose some 3d geometry and use the data to refine it. So you solve it in 3d-object-property-space.
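A toy illustration of the gap (my own construction, using cube corners and orthographic projection rather than real photographs): one fixed 3d description of an object yields a different 2d point set at every rotation, so there is no single "pixel distribution" for the object to learn from:

```python
import math

# One fixed 3d object: the eight corners of a cube.
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def rotate_y(p, theta):
    """Rotate a 3d point about the y-axis by angle theta."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def project(points, theta):
    """Orthographic projection onto the xy-plane after rotating by theta:
    a stand-in for 'take a photograph at this viewpoint'."""
    return sorted((round(x, 3), round(y, 3))
                  for x, y, _ in (rotate_y(p, theta) for p in points))

view_a = project(cube, 0.0)
view_b = project(cube, 0.7)
print(view_a == view_b)  # False: same object, different 2d distributions
```

The object's 3d description never changed; only the viewpoint did. Inverting that many-to-one relationship is exactly where bridging theory has to come in.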

With AI techniques, you have methods that work on abstracta (e.g., circles) being applied to measurement data. So you're solving the 3d/2d problem in pixel space, expecting this to work because "objects are made out of pixels, aren't they?" NO.

So there's a huge inferential gap that you cannot bridge here. And the young AI fanatics in research keep churning out papers showing that it does work, so long as it's a circle, chess, or some abstract game.