Per point 5, it says here: "Human hands are packed absolutely full of sensors. Getting anywhere near that kind of sensing out of robot hands and usable by a human puppeteer is not currently possible."

Then another quote, "No one has managed to get articulated fingers (i.e., fingers with joints in them) that are robust enough, have enough force, nor enough lifetime, for real industrial applications."

So (3) and (7) are relevant to lifetime, but another point, related to sensors, is that humans stop short of hurting themselves when they feel finger strain, whether by changing their grip or by crying off the task entirely. Hands are robust because they can operate at the edge of safe parameters by sensing strain and strategizing around risk. Humans know to come in out of the rain, so to speak.
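To make that concrete, here's a minimal sketch of a grip controller in that spirit. The Gripper class, thresholds, and strain model are all hypothetical, purely to show the sense-strain-and-re-strategize loop:

    import random

    class Gripper:
        """Toy stand-in for a real gripper API (hypothetical)."""
        def __init__(self):
            self.force = 0.0

        def set_force(self, newtons):
            self.force = newtons

        def read_strain(self):
            # Pretend strain rises with force, plus a little sensor noise.
            return min(1.0, self.force / 30.0 + random.uniform(0.0, 0.05))

        def adjust_pose(self):
            pass  # a real controller would re-plan the grasp here

        def release(self):
            self.force = 0.0

    def grip_with_strain_budget(gripper, target_force=20.0, strain_warn=0.7,
                                strain_abort=0.9, max_regrips=3):
        """Close toward target force; re-grip on warning strain, abort near limits."""
        force, regrips = 0.0, 0
        while force < target_force:
            force += 1.0
            gripper.set_force(force)
            strain = gripper.read_strain()      # fraction of rated limit
            if strain >= strain_abort:
                gripper.release()               # cry off the task entirely
                return "aborted"
            if strain >= strain_warn:
                if regrips == max_regrips:
                    gripper.release()
                    return "aborted"
                gripper.adjust_pose()           # change grip and back off
                regrips += 1
                force *= 0.5
        return "gripped"

    print(grip_with_strain_budget(Gripper()))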

I have come to realize that we barely understand complexity. I've read a lot on information theory, thermodynamics, and the many takes on entropy, not to mention the software development literature, since so much of that field is about managing complexity.

We severely underestimate how complex natural systems are. Autonomous agents seem like something we should be able to build. The idea is as old as digital computers. Turing famously wrote about that.

But an autonomous complex system is complex to an astronomical degree. Self-driving vehicles, let alone autonomous androids, are several orders of magnitude more complex than anything we can currently model.

Seems related to:

https://en.wikipedia.org/wiki/Variety_(cybernetics)

Yes! Thank you!

I have read Wiener and Ashby to reach this conclusion, and I've used this argument before: a piece of software capable of creating any possible piece of software would be infinitely complex. It's also the reason I don't buy the claim that "20 W general intelligence exists". The real wattage for generally intelligent humans is the entire energy input to the biosphere up to the evolution of humans.
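As a toy illustration of requisite variety (my own construction, nothing from Ashby verbatim): let disturbance d hit the system, let the regulator answer with response r, and let the outcome be (d - r) mod n. Brute force over every regulator strategy shows the surviving outcome variety can never drop below ceil(|D| / |R|), i.e. only variety in the regulator can absorb variety in the disturbances:

    from itertools import product
    import math

    def min_outcome_variety(n, responses):
        best = n
        # A strategy assigns one allowed response to each disturbance.
        for strategy in product(responses, repeat=n):
            outcomes = {(d - r) % n for d, r in zip(range(n), strategy)}
            best = min(best, len(outcomes))
        return best

    n = 6
    for k in (1, 2, 3, 6):
        v = min_outcome_variety(n, range(k))
        print(f"|D|={n} |R|={k}: best outcome variety = {v}, bound = {math.ceil(n / k)}")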

Planetary biospheres show general intelligence, not individual chunks of head meat.

That knowledge held in evolution equates to "training" for an AGI, I guess. Mimicking 4 billion years of evolution shouldn't take that long ... but it does sound kind of expensive now that you mention it.
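Rough, order-of-magnitude arithmetic on "expensive" (my numbers, both loose assumptions: ~100 TW of biosphere primary production over ~4 billion years, versus one 20 W brain running for 30 years):

    import math

    SECONDS_PER_YEAR = 3.15e7

    evolution_joules = 1e14 * 4e9 * SECONDS_PER_YEAR   # ~1.3e31 J
    brain_joules = 20.0 * 30 * SECONDS_PER_YEAR        # ~1.9e10 J

    print(f"evolution 'training' energy: {evolution_joules:.1e} J")
    print(f"one human brain, 30 years:   {brain_joules:.1e} J")
    print(f"ratio: ~10^{round(math.log10(evolution_joules / brain_joules))}")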


Now I'm imagining a brain in a jar, but with every world-mimicking evolved aspect of the brain removed. Like, it has no implicit knowledge of sound waves or shapes or - well, maybe those low-level things are processed in the ears and retinas, but it has no next-stage anticipation of audio or visual data, either, and no body plan that relates to the body's nerves, and no relationship to digestion or hormones or gravity or jump scares or anything else that would prepare it for being monkey-shaped and living in the world. But, it has the key thing for intelligence, the secret sauce, whatever that is. So it can sit there and be intelligent.

Then you can connect it up to some input and output, and ... it exhibits intelligence somehow. Initially by screaming like a baby. Then it adapts to the knowledge implicit in its input and output systems ... and that's down to the designer. If it has suction cup end effectors and a CCD image sensor array doobrie ... I guess it's going to be clumsy and bewildered. But would it be noticeably intelligent? Could it even scream like a baby, actually? I suppose our brains are pre-evolved to learn to talk. Maybe this unfortunate person would only be able to emit a static hiss. I can't decide if I think it would ever get anywhere and develop appreciable smarts or not.


I feel like I can intuit these things pretty well but others can't. For example, I see everyone talking about LLMs replacing developers and I'm over here thinking there is absolutely no way an LLM is replacing me any time soon. I'll be using it to do my job faster and better, sure, but it won't replace me. It can barely do a good job while I hold its hand every step of the way. It often goes crazy and does all kinds of dumb stuff.

Similarly, reading this article I agree with the author, and what they're saying seems obvious to me. Of course making robots that can match humans' abilities is an absolutely insurmountable task. Yes, insurmountable as in I don't think we will ever do it.

Automating specific tasks in a factory is one thing; making a robot that can just figure out how to do things and learn like a human does is many orders of magnitude beyond that. Even LLMs aren't there, as we can see from how they fail at basic tasks like counting the Rs in Raspberry. It's not intelligence, it's just the illusion of intelligence. Actual intelligence requires learning, not training. Actual intelligence won't run a command, fail to read its output, make up the output, and continue as if everything is fine when in fact nothing is fine. But LLMs will, because they're stupid stochastic parrots, basically fancy search engines. It's really strange to me how everyone else seems blind to this.
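For contrast, the counting task the LLMs trip over is a deterministic one-liner in ordinary code, which is part of why the failure is telling:

    print("Raspberry".lower().count("r"))   # -> 3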

Maybe if we someday figure out real artificial intelligence, we will have a chance to make humanoids that can match our own abilities.

Also to prevent breaking other things or hurting other people. That's also why robots will have tons of safety issues for a while.