> For 99% of tasks I'm totally certain there's people out there that are orders of magnitude better at them than me.
And LLMs slurped up the output of some of those experts together with the output of thousands of people who’d do the task worse, and you have no way of forcing it to imitate the good ones every time.
> If the AI can regurgitate their thinking, my output is better.
But it can’t. Not reliably and consistently, so that hypothetical is about as meaningful as “if I had a magic wand to end world hunger, I’d use it”.
> Humans may not need to think to just... do stuff.
If you don’t think to do regular things, you won’t be able to think to do advanced things. It’s like any muscle: if you don’t use it, it atrophies.
> And LLMs slurped up the output of some of those experts together with the output of thousands of people who’d do the task worse, and you have no way of forcing it to imitate the good ones every time.
That's solvable, though, whether through better training data or through RL.
> And LLMs slurped up the output of some of those experts together with the output of thousands of people who’d do the task worse
Theoretically fixable, then.
> But it can’t. Not reliably and consistently
Again, it can't yet, but with better training data I don't see a fundamental impossibility here. The comparison with a magic wand is, in my opinion, disingenuous.
> If you don’t think to do regular things, you won’t be able to think to do advanced things
Humans already don't think in a myriad of critical jobs. Once expertise is achieved in a particular task, it becomes mostly mechanical.
-
Again, I agree in essence with the original comment I was replying to. I do think AI will make us dumber overall, and I sort of wish it had never been invented.
But it was. And, being realistic, I will try to extract as much positive value from it as possible instead of dismissing it wholly.