> Even if LLMs make mistakes, the ability of LLMs to deliver useful code and hints improved to the point most skeptics started to use LLMs anyway
Here we go again. Statements whose only source is the head of the speaker. And it's also not true: LLMs still produce bad/irrelevant code at such a rate that you can spend more time prompting than doing things yourself.
I'm tired of this overestimation of LLMs.
My personal experience: if I can find a solution on Stack Overflow etc., the LLM will produce working and fundamentally correct code. If I can't find an already worked-out solution on those sites, the LLM hallucinates like crazy (never-existing functions/modules/plugins, protocol features which aren't specified, and even GitHub repos which never existed). So, as stated by many people online before: for low-hanging fruit, LLMs are a totally viable solution.
I don't remember the last time Claude Code hallucinated some library, as it will check the packages, verify with the linter, run a test import and so on.
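For what it's worth, that "test import" step is trivial to reproduce by hand. Here is a minimal sketch of that kind of check (my own illustration, not Claude Code's actual tooling; the fake package name is a placeholder):

    # Sketch: catch a hallucinated dependency before trusting generated code.
    # Illustrative only; the fake package name below is a placeholder.
    import importlib.util
    import subprocess
    import sys

    def package_installed(name: str) -> bool:
        # find_spec returns None when no importable module with that name exists
        return importlib.util.find_spec(name) is not None

    def test_import(name: str) -> bool:
        # Import in a subprocess so a broken package can't take down the caller.
        result = subprocess.run(
            [sys.executable, "-c", f"import {name}"],
            capture_output=True,
        )
        return result.returncode == 0

    for pkg in ("json", "totally_hallucinated_pkg"):
        print(pkg, package_installed(pkg), test_import(pkg))

Either check failing is a strong signal the model invented the library, and an agent can feed that failure straight back into the next prompt.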
Are you talking about punching something into some LLM web chat that's disconnected from your actual codebase and has tooling like web search disabled? If so, that's not really the state of the art of AI-assisted coding, just so you know.
Even where they are not directly using LLMs to write the most critical or core code, nearly every skeptic I know has started using LLMs at the very least to do things like write tests, build tools, write glue code, help debug or refactor, etc.
Your statement suffers not only from likewise coming solely from your own head, with no evidence that you've actually tried to learn to use these tools, but also from going against the weight of evidence that I see both in my professional network and online.
I just want people making statements like the author's to be more specific about how exactly the LLMs are being used. Otherwise they contribute to the belief that LLMs are a magical tool that can do anything.
I am aware of simple routine tasks that LLMs can do. This doesn’t change anything about what I said.
All you had to do was scroll down further and read the next couple of posts, where the author gets more specific about how they used LLMs.
I swear, the so-called critics need everything spoon-fed.
Sorry, but we're way past that. It's you who need to provide examples of tasks it can't do.
You need to meet more skeptics. (Or maybe I do.) In my world, it's much more rare than you say.
But you have just repeated what you are complaining about.
Do you want me to spend time coming up with a quality response to a lazy statement? It's like tilting at windmills. I'm fine with having my say the way I did.
> Here we go again. Statements whose only source is the head of the speaker. And it's also not true.
You're making the same sort of baseless claim you are criticising the blogger for making. Spewing baseless claims hardly moves any discussion forward.
> LLMs still produce bad/irrelevant code at such a rate that you can spend more time prompting than doing things yourself.
If that is your personal experience, then I regret to tell you that it is only a reflection of your own inability to work with LLMs and coding agents. Meanwhile, I personally manage to use LLMs effectively on everything from small refactoring needs to large software architecture designs, including generating fully working MVPs from one-shot agent prompts. From that alone, it's rather obvious who is making baseless statements and whose claims are more aligned with reality.
> Here we go again.
Indeed, he said the same thing as a reflection on the 2024 models:
https://news.ycombinator.com/item?id=42561151
It is always the fault of the "luser" who is not using and paying for the latest model.