I feel like I can intuit these things pretty well but others can't. For example, I see everyone talking about LLMs replacing developers, and I'm over here thinking there is absolutely no way an LLM is replacing me any time soon. I'll be using it to do my job faster and better, sure, but it won't replace me. It can barely do a good job while I hold its hand every step of the way. It often goes off the rails and does all kinds of dumb stuff.

Similarly, reading this article, I agree with the author, and what they're saying feels obvious to me. Of course making robots that can match humans' abilities is an absolutely insurmountable task. Yes, insurmountable as in I don't think we will ever do it.

Automating specific tasks in a factory is one thing; making a robot that can just figure out how to do things and learn like a human does is many orders of magnitude beyond that. Even LLMs aren't there, as we can see from how they fail at basic tasks like counting the Rs in Raspberry. It's not intelligence, it's just the illusion of intelligence. Actual intelligence requires learning, not training. Actual intelligence won't run a command, fail to read its output, make up the output, and continue as if everything is fine when in fact nothing is fine. But LLMs will, because they're stupid stochastic parrots, basically fancy search engines. It's really strange to me how everyone else seems blind to this.

Maybe if we someday figure out real artificial intelligence, we'll have a chance to make humanoids that can match our own abilities.