Fully agree. It takes months to learn how to use LLMs properly. There is an initial honeymoon where the LLMs blow your mind. Then you get some disappointments. But then you start realizing that there are some things LLMs are good at and some they are bad at. You start developing a feel for what you can expect them to do. And more importantly, you get into the habit of splitting problems into smaller problems that the LLMs are more likely to solve. You keep learning how to best describe the problem, and you keep adjusting your prompts. It takes time.
It really doesn't take that long. Maybe if you're super junior and have never coded before? In that case I'm glad it's helping you get into the field. Also, if it's taking you months, whole new models will be released in the meantime and you'll need to learn their quirks all over again.
No, it's a practice. You're not necessarily building technical knowledge; rather, you're building up an intuition. It's for sure not like learning a programming language. It's more like feeling your way along and figuring out how to inhabit a dwelling in the dark. We would just have to agree to disagree on this. I feel exactly as the parent commenter does. But it's not easy to explain (or to understand from someone's explanation).
How very condescending of you.
Love this, and it's so true. A lot of people don't get this, because it's so nuanced. It's not something that's slowing you down. It's not learning a technical skill. Rather, it's building an intuition.
I find it funny when people ask me if it's true that they can build an app using an LLM without knowing how to code. I think of this... that it took me months before I started feeling like I "got it" with fitting LLMs into my coding process. So, not only do you need to learn how to code, but getting to the point that the LLM feels like a natural extension of you has its own timeline on top.
Spot on. I've been coding for the last 25+ years. It took me a while (say, about a week) to start using it meaningfully. I still would not claim I am using it efficiently or have the most productive workflow, which I think is because I keep discovering new techniques almost daily.
> There is an initial honeymoon where the LLMs blow your mind out.
What does this even mean?
In the first year and a half after ChatGPT was released, they lied to me essentially every time I used them, so I completely missed this honeymoon phase. The first time one answered without problems was about two months ago, and that was also the first time one of them (ChatGPT) answered better than Google/Kagi/DDG could. Even yesterday, I tried to get Claude Opus to tell me when the next concert at Arena Wien is, and it failed miserably. I tried other models from Anthropic too, and they all failed. It successfully parsed the venue's page of upcoming events, then still failed miserably. Sometimes it answered with events from the past, sometimes with events in October. The closest it got was 21 August. When I asked what was on 14 August, it apologized and conceded I was right. When I asked about "events", it simply ignored all of the movie nights. When I asked about those specifically, it was as if I had started a new conversation.
The only time they produced anything comparable to my code in quality was when they had a ton of examples of tests that looked almost identical. Even then, they made mistakes… when basically I had to change two lines, so copy-pasting would have been faster.
There was an AI advocate here who was so confident in his AI skills that he did something most people here try to avoid: he recorded exactly how he works with AIs. Here is the catch: he showed the same thing. There were already examples; he needed only minimal modifications for the new code. And even then, copy-pasting would have been quicker and would have contained fewer mistakes… which he kept in the code, because they didn't fail right away.
I'm glad you feel like you've nailed it. I've been using models to help me code for over two years, and I still feel like I have no idea what I'm doing.
I feel like every time I write a prompt or use a new tool, I'm experimenting with how to make fire for the first time. That's not to say I'm bad at it; I'm probably better than most people. But knowing how to use this tool is by far the largest challenge, in my opinion.
Months? That’s actually an insanely long time
I dunno, man. I think you could have spent that time, you know, learning to code instead.
Sure. But it happens that I have 20 years of experience, and I know quite well how to code. Everything the LLM does for me I can do myself. But the LLM does it 100 times faster than me. Most days nowadays I push thousands of lines of code. And it's not garbage code; the LLMs write quite high-quality code. Of course, I still have to go through the code and make sure it all makes sense. So I am still the bottleneck. At some point I will probably grow to trust the LLM, but I'm not quite there yet.
> Most of the days nowadays I push thousands of lines of code
Insane stuff. It's clear you can't review that many changes in a day, so you're just flooding your codebase with code you've barely read.
Or is your job just re-doing the same boilerplate over and over again?
You are a bit quick to jump to conclusions. With LLMs, test-driven development becomes both a necessity and a pleasure. The actual functional code I push in a day is probably in the low hundreds of LOC. But I push a lot of tests too. And sure, a lot of that is boilerplate. But the tests run, pass, and if anything have better coverage than when I was writing all the code myself.
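The loop described above — human-authored tests gate LLM-generated code — might look like this minimal pytest-style sketch. The `slugify` function and its behavior are hypothetical, chosen only to illustrate the workflow, not taken from the thread:

```python
# Hypothetical TDD-with-an-LLM loop: the human writes the tests first,
# then lets the model iterate on the implementation until they pass.

def slugify(title: str) -> str:
    """Candidate implementation (the part an LLM would generate)."""
    # lower-case, split on any whitespace, rejoin with hyphens
    return "-".join(title.lower().split())


# Human-written tests that act as the acceptance gate.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_collapses_whitespace():
    assert slugify("  many   spaces  ") == "many-spaces"
```

The point is that the tests, not a line-by-line review of every generated statement, are what establish trust in the output.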
If you have 20 years of experience, then you know that the number of lines of code is inversely proportional to code quality.
> ...thousands of lines of code ... quite high quality
A contradiction in terms.
Here’s an experiment for the two of us: we should both bookmark this page and revisit it one year from now. It is likely that at least one of us, or even both, will see the world differently by then.
it is, mind you, exactly the same experience as working on a team with lots of junior engineers, and delegating work to them
Wait a minute, you didn't just claim that we have reached AGI, right? I mean, that's what it would mean to delegate work to junior engineers, right? You're delegating work to human level intelligence. That's not what we have with LLMs.
Yes and no. With junior developers you need to educate them. You need to do that with LLMs too. Maybe you need to break the problem down into smaller chunks, but you get the hang of this after a while. And once the LLM understands the task, you get a few hundred lines of code in a matter of minutes. With a junior developer you are lucky if they come back the same day. The iteration speed with AI is simply in a different league.
Edit: it is Sunday. As I relax and spend time writing answers on HN, I'm keeping a lazy eye on the progress of an LLM at work too. I got stuff done that would have taken me a few days of work just by clicking a "Continue" button now and then.