There's a great deal of space between "effectively human" and "god machine". Effectively human meaning it takes 20 years to train it, and then it's good at one thing and OK at some other things, if you're lucky. We expect more from LLMs right now, like having very broad knowledge and being able to ingest vastly more context than a human can, every time they're used. So we probably don't just want a human intelligence; we want an instant, specific one. And the ability to generate an instant, specific intelligence would surely be further down the line toward your god-like machine anyway.

The measure of human intelligence is never what humans are already good at, but rather our capability to figure out things we haven't done before. That is, we can create and build new pathways inside our brains to perform and optimize tasks we have never done. Practice then reinforces these pathways. In a sense we do what we wish LLMs could do: we use our intelligence to train ourselves.

It's a long(ish) process, but it's this process that actually constitutes human intelligence. I could take a random human right now and drop them somewhere they've never been before, and they'll figure it out.

For example, you may be shocked to learn that the human brain has no innate pathways for reading, unlike spoken language. We have to build those manually. We are, literally, modifying our brains when we learn new skills.

> For example, you may be shocked to learn that the human brain has no innate pathways for reading, unlike spoken language.

I'm not shocked at all.

> I could take a random human right now and drop them somewhere they've never been before, and they'll figure it out.

Yes, well, not really. You could drop them anywhere in the human world, in their own body. And even then, if you dropped me into a warehouse in China I'd have no idea what to do; I'd be culturally lost and unable to understand the language. And I'd want to go home. So yes, you could drop in a human, but they wouldn't then just perform work like an automaton. You couldn't drop their mind into a non-human body and expect anything interesting to happen, and you certainly couldn't drop them anywhere inhospitable. Closer to your example, you couldn't drop a football player into a maths convention, or a maths professor into a football game, and expect good results. The point of an AI is to be useful. I think AGI is very far away and maybe not even possible, whereas specific AIs already abound.

[deleted]

It doesn't take 20 years for humans to learn new tasks. Perhaps to master very complicated ones, but there are many tasks you can certainly learn to do in a short amount of time. For example: "Take this hammer and put nails in the top 4 corners of this box, then turn it around and do the same." You can master that relatively easily. An AGI ought to be able to do practically all such tasks.

In any case, general intelligence merely means the capability to learn, not the amount of time it takes. I would certainly bet that a theoretical physicist, for example, could learn to code in a matter of days despite never having been introduced to a computer before, because our intelligence is built on a very interconnected world model.

It takes about 10 years to train a human to do anything useful after creation.

A 4-year-old can navigate the world better than any AI robot can.

While I'm constantly disappointed by self-driving cars, I do get the impression they're better at navigating the world than I was when I was four. And on public roads specifically, better than when I was fourteen.