Historically, learning and AI systems that feed their output back into their input (more or less) spiral off into la-la land.

I think this happens with humans in places like social media echo chambers (or parts of academia) when they talk and talk without contact with any outer reality. It can be a source of creativity, but also of madness and insane ideas.

I’m quite firmly on the side of learning requiring either direct or indirect (informed by others) embodiment, or at least access to something outside. I don’t think a closed system can learn, and I suspect that this may reflect the fact that entropy increases in a closed system (second law).

As I said recently in another thread, I think self-contemplating, self-improving “foom” AI scenarios are proposing informational perpetual-motion or infinite-energy machines.

Everything has to “touch grass.”

> Everything has to “touch grass.”

Not wrong, but it's been said that a video clip of an apple falling on Newton's head is technically enough information to infer the theory of relativity. You don't need a lot of grass, with a well-ordered mind.

Is that true? Seems dubious to me. The time, velocity, and distance scales of a falling apple put relativistic effects many orders of magnitude below anything that would show up in a video clip.
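For scale, a back-of-envelope sketch (my own assumed numbers, not anything taken from the clip): at the few metres per second an apple reaches, both the special-relativistic and the gravitational time-dilation corrections come out to roughly one part in 10^16, far below what any frame rate or pixel grid could resolve.

```python
# Rough sketch with assumed values: size of relativistic corrections
# for an apple falling a metre or so.
c = 2.998e8          # speed of light, m/s
v = 5.0              # apple speed after ~1.3 m of free fall, m/s (assumed)

# Special-relativistic correction: gamma - 1 ~ v^2 / (2 c^2) for v << c
gamma_minus_1 = v**2 / (2 * c**2)
print(f"time-dilation correction  ~ {gamma_minus_1:.1e}")   # ~1.4e-16

# Gravitational time dilation across a height difference h near Earth's surface:
# delta ~ g * h / c^2
g, h = 9.81, 1.3
grav_delta = g * h / c**2
print(f"gravitational correction ~ {grav_delta:.1e}")        # ~1.4e-16
```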

It might be enough to deduce Newtonian motion if you have a lot of the required priors already.

A lot of telescope data over time, combined with a strong mathematical model and a lot of other priors, is probably enough to get relativity. You have to be able to see things like planetary motion, notice that the results don't match Newton exactly, and then have enough data to fit a different model. You probably also need to know a lot about the behavior of light.
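To make that concrete, here's a rough sketch (standard published constants, assumed here rather than taken from the thread) of the classic "doesn't match Newton" residual: general relativity's extra perihelion advance for Mercury works out to about 43 arcseconds per century, which is exactly the leftover that Newtonian gravity plus known planetary perturbations couldn't account for.

```python
# Sketch: GR's extra perihelion precession for Mercury, using standard
# published constants (assumed values, not derived from the thread).
import math

GM_sun = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
c      = 2.99792458e8       # speed of light, m/s
a      = 5.7909e10          # Mercury's semi-major axis, m
e      = 0.2056             # Mercury's orbital eccentricity
T_days = 87.969             # Mercury's orbital period, days

# Extra perihelion advance per orbit (radians):
# delta_phi = 6 * pi * G * M / (c^2 * a * (1 - e^2))
dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))

orbits_per_century = 36525 / T_days
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec_per_century:.1f} arcsec/century")   # ~43, matching the observed anomaly
```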

Said by Eliezer Yudkowsky, a known AI shill, cult leader, and HP fanfic writer with no formal education.