I'm a bit surprised that Brooks is focusing on sensory and electro-mechanical issues as what's holding back humanoid robotics...
Just being the shape of a human means nothing if it doesn't also have the brain of a human, which is needed if it's meant to be general purpose, or at least adaptable to a useful variety of tasks.
How exactly are these robots meant to be trained for new tasks (or the same task in a different factory, or at a different work station)? It seems that to be useful they'd need to be able to learn on the job. Even if they've been pre-trained in simulation to do a wide variety of things, and can take natural-language or imitation instruction, they're still going to need to bridge both the sim-to-real gap and the book-smart-to-proficient gap to actually perform, and that would at least require runtime learning.
Tesla/Musk seem to think that they're already in the robotics business with self-driving cars trained in simulation, but a car only has two control degrees of freedom (speed and steering angle), and only performs one task: driving. A general-purpose humanoid robot is a whole other level of complexity.
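To put rough numbers on that complexity gap: a car's control interface is 2 continuous dimensions, while a humanoid has on the order of 28 actuated joints (that count is a ballpark assumption for illustration, not a spec for any particular robot). Even a coarse discretization shows how fast the joint action space explodes:

```python
# Rough illustration of the action-space gap between a car and a humanoid.
# DOF counts are illustrative assumptions, not exact hardware specs.
CAR_DOF = 2        # speed, steering angle
HUMANOID_DOF = 28  # ballpark actuator count for arms, legs, torso, hands

BINS = 5  # coarsely discretize each control dimension into 5 levels

car_actions = BINS ** CAR_DOF            # 25 joint actions
humanoid_actions = BINS ** HUMANOID_DOF  # ~3.7e19 joint actions

print(car_actions)       # 25
print(humanoid_actions)  # 37252902984619140625
```

The point isn't that anyone enumerates actions this way, just that the search space a humanoid policy has to cope with is combinatorially larger than a car's.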
>How exactly are these robots meant to be trained for new tasks (or just same task, different factory, or different work station)? It seems that to be useful they'd need to be able to learn on the job.
The plan for raw intelligence is to push transformers as far as we can. What's the full extent of in-context learning (ICL) for robotics? We don't know the answer to that yet.
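The interface ICL implies for robotics looks something like this: the "prompt" is a demonstration trajectory of (observation, action) pairs, and the policy must act on a new observation conditioned only on that context, with no weight updates. Here's a toy stand-in, a sketch of my own with made-up names and shapes, where nearest-neighbor lookup over the context plays the role a transformer's attention would:

```python
import numpy as np

# Toy sketch of the in-context learning interface for a robot policy.
# A real system would run a transformer over tokenized (obs, action) pairs;
# a nearest-neighbor lookup over the demo context stands in for attention.
def in_context_policy(demo_obs, demo_actions, new_obs):
    """Pick an action for new_obs using only the demo context (no training)."""
    dists = np.linalg.norm(demo_obs - new_obs, axis=1)
    return demo_actions[np.argmin(dists)]

# One demonstration: three observations mapped to three discrete actions.
demo_obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
demo_actions = np.array([0, 1, 2])

print(in_context_policy(demo_obs, demo_actions, np.array([0.9, 0.1])))  # 1
```

The open question is whether a transformer can generalize this trick to a new factory or work station from a handful of demonstrations, rather than just interpolating within its training distribution.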
This was released just a few days ago: https://www.skild.ai/blogs/omni-bodied