Silly question maybe, but didn’t Boston Dynamics have videos of bipedal robots doing acrobatics / running ~7/8 years ago? Kinda looked like they “solved” locomotion then

Their approach required pre-computation and simulation before execution. If you watch their videos carefully, you can see the advance planning work on some of the screens.

Some of it was pre-computed. The middle layer, if you like. The Boston Dynamics group had a walking gait of a sort nailed in the '80s; the trotting-on-the-spot that BigDog did was essentially a continuation of those mechanics, and that's all based on a conceptually simple balance problem which is intrinsically reactive, not pre-planned. So that's what was going on at the lowest level.
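To make "intrinsically reactive, not pre-planned" concrete, here's a toy sketch (my own illustration, not BD's actual controller): a PD loop stabilising an inverted pendulum, which is the textbook stand-in for a body balancing over a stance leg. Torque is a pure function of the current tilt and tilt rate, recomputed every tick; nothing is scripted in advance. Gains and the normalised dynamics are made-up values.

```python
import math

def simulate_balance(theta0, steps=2000, dt=0.001,
                     kp=120.0, kd=25.0, g=9.81, length=1.0):
    """Return final tilt (rad) after reactively PD-stabilising an
    inverted pendulum (inertia normalised to 1 for simplicity)."""
    theta, omega = theta0, 0.0              # tilt angle, angular velocity
    for _ in range(steps):
        torque = -kp * theta - kd * omega   # pure reaction to current state
        alpha = (g / length) * math.sin(theta) + torque  # gravity + control
        omega += alpha * dt
        theta += omega * dt
    return theta
```

Kick it off balance with any starting tilt and it recovers, which is the same flavour of robustness the car-park videos were showing off, just in one dimension.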

At the top level you have the actual environment, with those meme videos of the robot trotting through a car park, getting kicked off balance, and recovering. The whole point of those tests was to demonstrate how robust their tech was to non-precomputed disturbances.

And between the two you've got the direction and planning layer, telling the robot to go from A to B with some set of suitably convoluted parameters that nobody but the operators would have understood. That planning layer might do all sorts of pre-computation and simulation, but it needs to do it in the context of a noisy and possibly adversarial environment. That's as true for Atlas as it was for BigDog, even when there's nobody actually kicking it. What I suspect the precompute and simulation are doing at that layer is a) checking the physical viability of the requested route, and b) tuning parameters in response to sensor readings over a number of runs - not telling the robot the exact sequence of motions. But I'm nowhere near those teams (oh, I wish), so I can't say whether that's true - maybe someone else round here can.
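Purely to illustrate the two jobs I'm speculating about above (this is a made-up sketch, not anything BD has published): (a) reject a requested route that the gait can't physically bridge, and (b) nudge one gait parameter across runs based on signed sensor error. The limits and update rule here are invented for illustration.

```python
MAX_STEP = 0.8    # assumed reach limit; the real constraint would come
MAX_CLIMB = 0.3   # from the robot's actual kinematics and dynamics

def route_viable(waypoints):
    """Check a 2D (distance, height) route for gaps or climbs
    beyond the assumed gait limits - viability, not motion scripting."""
    for (x1, z1), (x2, z2) in zip(waypoints, waypoints[1:]):
        if abs(x2 - x1) > MAX_STEP or abs(z2 - z1) > MAX_CLIMB:
            return False
    return True

def tune_gain(run_errors, gain=1.0, rate=0.5):
    """Adjust one gait parameter toward lower tracking error,
    one correction per completed run."""
    for err in run_errors:
        gain -= rate * err   # signed error reported by sensors after a run
    return gain
```

The point of the sketch: neither function emits a motion sequence. One vets the plan, the other adapts a knob - the reactive layer below still decides every individual step.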

I can understand pre-computation making the "software" problem of locomotion easier, but how does it help with the hardware problems laid out in the article, i.e. repeated very high loads over a very short amount of time?

BD used hydraulics for a long time. It works, but it's inefficient: you have to carry the actuators, the tank, probably a hydraulic accumulator, the pump, the valves, and the power source for the pump. That's why BD's machines were so big. Someone at Google said "We need to have a conversation about hydraulics", and the dog robot in 2019 was the first all-electric machine.

Plenty of BD clips of the old Atlas show oil lines bursting and spraying the room with oil.

it's indeed a mess.

BD was under business pressures, and a computerized automaton doing baked ninja back-flips with servos is more impressive than an inexpensive FK/IK demo dead-lifting 1000 lbs. Google broke that company with their opinions.

Even if private labs have a viable platform solution, people won't care unless they can clone it for free. Not a lot of incentive for design change, but building Kryten 2X4B-523P would be hilarious. =3

Overclocking a CPU might make it seem like you've solved something and gotten better performance, but sooner or later it breaks down. As I read the article, pre-computation essentially lets you "overclock" the hardware: it makes it seem you've solved the problem of locomotion, when what you've actually done is build something that looks impressive for a far shorter time than is usually used to calculate what the hardware can bear.

on edit: apologies if my analogy is not the best.