Agreed, conceptually.
BUT. For 99% of tasks I'm totally certain there are people out there who are orders of magnitude better at them than me.
If the AI can regurgitate their thinking, my output is better.
Humans may need to think to advance the state of the art.
Humans may not need to think to just... do stuff.
> For 99% of tasks I'm totally certain there's people out there that are orders of magnitude better at them than me.
And LLMs slurped some of those together with the output of thousands of people who’d do the task worse, and you have no way of forcing it to be the good one every time.
> If the AI can regurgitate their thinking, my output is better.
But it can’t. Not definitively and consistently, so that hypothetical is about as meaningful as “if I had a magic wand to end world hunger, I’d use it”.
> Humans may not need to think to just... do stuff.
If you don’t think to do regular things, you won’t be able to think to do advanced things. It’s akin to any muscle; you don’t use it, it atrophies.
> And LLMs slurped some of those together with the output of thousands of people who’d do the task worse, and you have no way of forcing it to be the good one every time.
That's solvable though, whether through changing training data or RL.
> And LLMs slurped some of those together with the output of thousands of people who’d do the task worse
Theoretically fixable, then.
> But it can’t. Not definitively and consistently
Again, it can't, yet, but with better training data I don't see a fundamental impossibility here. The comparison with any magic wand is, in my opinion, disingenuous.
> If you don’t think to do regular things, you won’t be able to think to do advanced things
Humans already don't think for a myriad of critical jobs. Once expertise is achieved on a particular task, it becomes mostly mechanical.
-
Again, I agree in essence with the original comment I was replying to. I do think AI will make us dumber overall, and I sort of wish it had never been invented.
But it was. And, being realistic, I will try to extract as much positive value from it as possible instead of discounting it wholly.
Only if you're less intelligent than the average. The problem with LLMs is that they will always regress to the average of the information they were trained on.
And if the average person is orders of magnitude better than you at thinking, you're right... you should let the AI do it lol
Your comment is nonsensical. Have you ever used any LLM?
Ask the LLM to... I don't know, to explain to you the chemistry of aluminium oxides.
Do you really think the average human will even get remotely close to the knowledge an LLM will return to such a simple question?
Ask an LLM to amend a commit. Ask it to initialize a rails project. Have it look at a piece of C code and figure out if there are any off-by-one errors.
Then try the same to a few random people on the street.
If you think the knowledge stored in the LLM weights for any of these questions is that of the average person I don't even know what to say. You must live in some secluded community of savant polymaths.
Do you think that the average person can win a gold medal at the IMO?
> Humans may not need to think to just... do stuff.
God forbid we should ever have to think lol
It is concerning how some people really don't want to think about some things, and just "do".
Very Zen of you to say
Imagine if everyone got the opportunity to work on SOTA. What a world that would be.
Unfortunately that’s not where we’re headed.
We've never been there.
With AI and robotics there may be the slim chance we get closer to that.
But we won't. Not because of AI, but because of humans, of course.