So much easier in many ways.
You can train in stages.
First stage: either digitally generate (synthetic) basic movements, or record basic movements from a human model. The former is probably better and can generate endless variation.
But the model is only trying to control joint angles, positions, etc., with no worries about controlling power. The simulated system has no complications like friction.
Then you train with friction, joint viscosity, power deviating from what is demanded (ramp-up and ramp-down times, fade, etc.).
Then train in a complex simulated environment.
Then train for control.
Etc.
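A minimal sketch of what such a staged curriculum could look like, assuming a hypothetical gym-style environment factory and policy interface (the stage names, parameters, and numbers are illustrative, not from any real system):

    # Hypothetical sketch of the staged curriculum above; StageConfig, make_env
    # and the policy interface are illustrative, not a real library API.
    from dataclasses import dataclass

    @dataclass
    class StageConfig:
        name: str
        friction: float         # Coulomb friction (0.0 = the idealised first stage)
        joint_viscosity: float  # velocity-dependent damping at the joints
        power_lag_s: float      # delay between demanded and delivered torque
        torque_noise: float     # actuator noise as a fraction of the command
        cluttered_scene: bool   # whether the simulated world contains obstacles

    CURRICULUM = [
        StageConfig("kinematics_only",  0.0, 0.00, 0.00, 0.00, False),  # joint angles only
        StageConfig("realistic_joints", 0.8, 0.05, 0.02, 0.05, False),  # friction, viscosity, power lag
        StageConfig("complex_world",    0.8, 0.05, 0.02, 0.05, True),   # cluttered environment
    ]

    def train(policy, make_env, episodes_per_stage=10_000):
        """Run the same policy through progressively harder simulations."""
        for stage in CURRICULUM:
            env = make_env(stage)  # simulator configured with this stage's physics
            for _ in range(episodes_per_stage):
                obs, done = env.reset(), False
                while not done:
                    action = policy.act(obs)
                    obs, reward, done = env.step(action)
                    policy.update(obs, reward, done)
            print(f"finished stage: {stage.name}")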
The point being, robotic control can easily be broken down into small steps of capability.
That massively improves training speed and efficiency, and potentially allows for smaller models.
It is also a far simpler task, by many orders of magnitude, than learning the corpus of the written internet.
Comparable to that would be training an AI to operate any land, sea or air device, which nobody today is trying (AFAIK).
It's so easy! I hope all the leading robotics researchers come to find this comment and finally deliver us the dexterous humanoid robots we've all been waiting for.
Well, in fairness, the kind of deep neural architectures needed to do this stuff have only been available for a relatively short period. The robotics researchers in my institution are basically racing to put all this new capability to work.
Eg: https://hub.jhu.edu/2025/07/09/robot-performs-first-realisti...
Synthetic data works better for robots since you can generate endless scenarios based on real physical laws.
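As a rough illustration of what that could mean in practice (the parameter names and ranges below are made up for the example, not tied to any particular simulator), you can sample a fresh physical scenario for every episode:

    # Illustrative only: each sampled dict parameterises one simulated episode,
    # so the policy effectively never sees the same scene twice.
    import random

    def sample_scenario(rng: random.Random) -> dict:
        return {
            "object_mass_kg":    rng.uniform(0.05, 2.0),
            "surface_friction":  rng.uniform(0.2, 1.2),
            "stiffness_n_per_m": rng.uniform(1e3, 1e5),   # brittle vs. flexible objects
            "gripper_latency_s": rng.uniform(0.0, 0.05),
            "camera_noise_std":  rng.uniform(0.0, 0.02),
        }

    rng = random.Random(0)
    scenarios = (sample_scenario(rng) for _ in range(1_000_000))  # endless variation, cheaply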
Do you know of success stories here, i.e. of transferring models learned in physics simulation to the real world?
When we (ZenRobotics) tried this 15 years ago a big problem was the creation of sufficiently high-fidelity simulated worlds. Gathering statistics and modelling the geometry, brittleness, flexibility, surface texture, friction, variable density etc of a sufficiently large variety of objects was harder than gathering data from the real world.
We have massively better physics simulations today than 15 years ago, so the limitations you found back then don't apply today. It might still not be enough, but 15 years is a long time with Moore's law, and since we already know all the physics, we mainly just needed more computation.
Example of modern physics simulation: https://www.youtube.com/watch?v=7NF3CdXkm68
Google has done training in simulation: https://x.company/projects/everyday-robots/#:~:text=other%20...
I believe this is the most popular tool now: https://github.com/google-deepmind/mujoco
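For a feel of the API, here is a tiny example with the MuJoCo Python bindings: a single hinge joint with damping and friction loss, driven by a constant torque (the MJCF model and values are just for illustration):

    # pip install mujoco
    import mujoco

    MJCF = """
    <mujoco>
      <worldbody>
        <body>
          <joint name="hinge" type="hinge" axis="0 1 0" damping="0.1" frictionloss="0.05"/>
          <geom type="capsule" size="0.02" fromto="0 0 0 0 0 -0.3" mass="0.5"/>
        </body>
      </worldbody>
      <actuator>
        <motor joint="hinge" gear="1"/>
      </actuator>
    </mujoco>
    """

    model = mujoco.MjModel.from_xml_string(MJCF)
    data = mujoco.MjData(model)

    for _ in range(1000):
        data.ctrl[0] = 0.1           # constant torque command to the motor
        mujoco.mj_step(model, data)  # advance the physics one timestep

    print("final hinge angle (rad):", data.qpos[0])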
Thanks for the links.
AFAICT these have not resulted in any shipping products.