Not OP, but when it comes to using LLMs in any professional setting, like programming, editing, or writing technical specifications, OP is correct.
Without extensive prompting and injecting my own knowledge and experience, LLMs generate absolutely unusable garbage (on average). Anyone who disagrees very likely is not someone who would produce good quality work by themselves (on average). That's not a clever quip; that's a very sad reality. SO MANY people cannot be bothered to learn anything if they can help it.
The triad of LLM dependencies, in my view: task initiation, experience-based feedback, and a consequence sink. They can provide none of these; they all connect to the outer context, which sits with the user, not the model.
You know what? This is not unlike hiring a human, either: they need the hiring party to tell them what to do, give feedback, and assume the outcomes.
It's all about context, which is non-fungible and distributed; it's not about intelligence but about the reason we need intelligence in the first place.
> Anyone who disagrees very likely is not someone who would produce good quality work by themselves (on average).
So for those producing slop and not knowing any better (or not caring), AI just improved the speed at which they work! Sounds like a great investment for them!
For many, mastering any given craft might not be the goal; the goal is just pushing stuff out the door and paying the bills. A case of mismatched incentives, one might say.
I would completely disagree. I use LLMs daily for coding. They are quite far from AGI, and it does not appear they will be replacing Senior or Staff Engineers any time soon. But they are incredible machines, perfectly capable of performing some economically valuable tasks in a fraction of the time it would have taken a human. If you deny this, your head is in the sand.
Capable, yeah, but not reliable; that's my point. They can one-shot fantastic code, or they can one-shot code I then have to review and pull my hair out over for a week because it's such crap (and the person who pushed it is my boss, for example, so I can't just tell him to try again).
That's not consistent.
You can ask your boss to submit PRs using Codex's "try 5 variations of the same task and select the one you like most" feature, though.
Surely at that point they could write the code themselves faster than they can review 5 PRs.
Producing more slop for someone else to work through is not the solution you think it is.
Why do you frame the options as "one-shot... or... one-shot"?
Because lazy people will use it like that, and we are all inherently lazy.
It's not much better with planning either. The amount of time I spend planning, clarifying requirements, and hand-holding implementation details always offsets any potential savings.
Have you never used one to hunt down an obscure bug and found the answer quicker than you likely would have yourself?
Actually, yeah, a couple of times, but that was a rubber-duck approach: the AI said something utterly stupid, but while trying to explain things to it, I figured it out myself. I don't think an LLM has ever solved a difficult problem for me. However, I'm likely an outlier because I solve most issues myself anyway.