The measure of human intelligence is never what humans are already good at, but rather our capacity to figure out things we haven't encountered before. That is, we can build new pathways in our brains to perform and optimize tasks we have never done, and practice then reinforces those pathways. In a sense we do what we wish LLMs could do - we use our intelligence to train ourselves.

It's a long (ish) process, but it's this process that actually constitutes human intelligence. I could take a random human right now and drop them somewhere they've never been before, and they will figure it out.

For example, you may be shocked to know that the human brain has no innate pathways for reading, unlike spoken language. We have to build those ourselves. We are, literally, modifying our brains when we learn new skills.

> For example, you may be shocked to know that the human brain has no innate pathways for reading, unlike spoken language.

I'm not shocked at all.

> I could take a random human right now and drop them somewhere they've never been before, and they will figure it out.

Yes, well, not really. You could drop them anywhere in the human world, in their own body. And even then, if you dropped me into a warehouse in China I'd have no idea what to do; I'd be culturally lost and unable to understand the language. And I'd want to go home. So yes, you could drop in a human, but they wouldn't just start performing work like an automaton. You couldn't drop their mind into a non-human body and expect anything interesting to happen, and you certainly couldn't drop them anywhere inhospitable. Nearer to your example, you couldn't drop a football player into a maths convention, or a maths professor into a football game, and expect good results. The point of an AI is to be useful. I think AGI is very far away and maybe not even possible, whereas specific AIs already abound.